Non-text mode for pg_dumpall
Tom and Nathan opined recently that providing for non-text mode for
pg_dumpall would be a Good Thing (TM). Not having it has been a
long-standing complaint, so I've decided to give it a go.
I think we would need to restrict it to directory mode, at least to
begin with. I would have a toc.dat with a different magic block (say
"PGGLO" instead of "PGDMP") containing the global entries (roles,
tablespaces, databases). Then for each database there would be a
subdirectory (named for its toc entry) with a standard directory mode
dump for that database. These could be generated in parallel (possibly
by pg_dumpall calling pg_dump for each database). pg_restore on
detecting a global-type toc.dat would restore the globals and then each
of the databases (again possibly in parallel).
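The proposed magic-block distinction could be checked along these lines (a sketch only; "PGGLO" is just the suggested magic from the mail above, and the function name is made up):

```python
# Hypothetical sketch: distinguish an ordinary per-database archive ("PGDMP")
# from the proposed cluster-wide TOC ("PGGLO") by the magic block at the
# start of toc.dat. Neither the helper nor "PGGLO" exists yet.

def classify_toc(path):
    """Return 'global', 'database', or 'unknown' based on the magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(5)
    if magic == b"PGGLO":
        return "global"      # cluster-wide TOC: roles, tablespaces, databases
    if magic == b"PGDMP":
        return "database"    # standard single-database archive
    return "unknown"
```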
I'm sure there are many wrinkles I haven't thought of, but I don't see
any insurmountable obstacles, just a significant amount of code.
Barring the unforeseen, my aim is to have a preliminary patch by the
September CF.
Following that I would turn my attention to using it in pg_upgrade.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:
Tom and Nathan opined recently that providing for non-text mode for
pg_dumpall would be a Good Thing (TM). Not having it has been a
long-standing complaint, so I've decided to give it a go.
Thank you!
I think we would need to restrict it to directory mode, at least to begin
with. I would have a toc.dat with a different magic block (say "PGGLO"
instead of "PGDMP") containing the global entries (roles, tablespaces,
databases). Then for each database there would be a subdirectory (named for
its toc entry) with a standard directory mode dump for that database. These
could be generated in parallel (possibly by pg_dumpall calling pg_dump for
each database). pg_restore on detecting a global-type toc.dat would restore
the globals and then each of the databases (again possibly in parallel).
I'm curious why we couldn't also support the "custom" format.
Following that I would turn my attention to using it in pg_upgrade.
+1
--
nathan
On 2024-06-10 Mo 10:14, Nathan Bossart wrote:
On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:
Tom and Nathan opined recently that providing for non-text mode for
pg_dumpall would be a Good Thing (TM). Not having it has been a
long-standing complaint, so I've decided to give it a go.
Thank you!
I think we would need to restrict it to directory mode, at least to begin
with. I would have a toc.dat with a different magic block (say "PGGLO"
instead of "PGDMP") containing the global entries (roles, tablespaces,
databases). Then for each database there would be a subdirectory (named for
its toc entry) with a standard directory mode dump for that database. These
could be generated in parallel (possibly by pg_dumpall calling pg_dump for
each database). pg_restore on detecting a global-type toc.dat would restore
the globals and then each of the databases (again possibly in parallel).
I'm curious why we couldn't also support the "custom" format.
We could, but the housekeeping would be a bit harder. We'd need to keep
pointers to the offsets of the per-database TOCs (I don't want to have a
single per-cluster TOC). And we can't produce it in parallel, so I'd
rather start with something we can produce in parallel.
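Andrew's housekeeping concern can be pictured concretely. A toy sketch, not the pg_dump custom format: the footer layout and both function names here are invented purely to illustrate keeping "pointers to the offsets of the per-database TOCs" inside a single file:

```python
# Invented single-file layout: database sections back to back, followed by
# an index of (name, offset) entries and a fixed-size trailer so a reader
# can seek straight to one database's TOC. Illustration only.
import struct

def write_combined(path, sections):
    """sections: list of (dbname, bytes). Appends a simple offset index."""
    index = []
    with open(path, "wb") as f:
        for name, blob in sections:
            index.append((name, f.tell()))   # remember where this db starts
            f.write(blob)
        index_start = f.tell()
        for name, off in index:
            enc = name.encode()
            f.write(struct.pack("<I", len(enc)) + enc + struct.pack("<Q", off))
        f.write(struct.pack("<IQ", len(index), index_start))  # trailer

def read_index(path):
    """Return {dbname: offset} by reading the trailer, then the index."""
    with open(path, "rb") as f:
        f.seek(-12, 2)                        # trailer is the last 12 bytes
        count, index_start = struct.unpack("<IQ", f.read(12))
        f.seek(index_start)
        out = {}
        for _ in range(count):
            (n,) = struct.unpack("<I", f.read(4))
            name = f.read(n).decode()
            (off,) = struct.unpack("<Q", f.read(8))
            out[name] = off
        return out
```

The catch, as noted above, is that the sections must be appended serially, which is exactly why the directory layout parallelizes more easily.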
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com>
wrote:
On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote:
Tom and Nathan opined recently that providing for non-text mode for
pg_dumpall would be a Good Thing (TM). Not having it has been a
long-standing complaint, so I've decided to give it a go.
Thank you!
Indeed, this has been quite annoying!
I think we would need to restrict it to directory mode, at least to begin
with. I would have a toc.dat with a different magic block (say "PGGLO"
instead of "PGDMP") containing the global entries (roles, tablespaces,
databases). Then for each database there would be a subdirectory (named for
its toc entry) with a standard directory mode dump for that database.
These
could be generated in parallel (possibly by pg_dumpall calling pg_dump
for
each database). pg_restore on detecting a global-type toc.dat would
restore
the globals and then each of the databases (again possibly in parallel).
I'm curious why we couldn't also support the "custom" format.
Or maybe even a combo - a directory of custom format files? Plus that one
special file being globals? I'd say that's what most use cases I've seen
would prefer.
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
On Mon, Jun 10, 2024 at 10:51:42AM -0400, Andrew Dunstan wrote:
On 2024-06-10 Mo 10:14, Nathan Bossart wrote:
I'm curious why we couldn't also support the "custom" format.
We could, but the housekeeping would be a bit harder. We'd need to keep
pointers to the offsets of the per-database TOCs (I don't want to have a
single per-cluster TOC). And we can't produce it in parallel, so I'd rather
start with something we can produce in parallel.
Got it.
--
nathan
On Mon, Jun 10, 2024 at 04:52:06PM +0200, Magnus Hagander wrote:
On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com>
wrote:
I'm curious why we couldn't also support the "custom" format.
Or maybe even a combo - a directory of custom format files? Plus that one
special file being globals? I'd say that's what most use cases I've seen
would prefer.
Is there a particular advantage to that approach as opposed to just using
"directory" mode for everything? I know pg_upgrade uses "custom" mode for
each of the databases, so a combo approach would be a closer match to the
existing behavior, but that doesn't strike me as an especially strong
reason to keep doing it that way.
--
nathan
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com>
wrote:
On Mon, Jun 10, 2024 at 04:52:06PM +0200, Magnus Hagander wrote:
On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com
wrote:
I'm curious why we couldn't also support the "custom" format.
Or maybe even a combo - a directory of custom format files? Plus that one
special file being globals? I'd say that's what most use cases I've seen
would prefer.
Is there a particular advantage to that approach as opposed to just using
"directory" mode for everything? I know pg_upgrade uses "custom" mode for
each of the databases, so a combo approach would be a closer match to the
existing behavior, but that doesn't strike me as an especially strong
reason to keep doing it that way.
A gazillion files to deal with? Much easier to work with individual custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
It's not things that are make-it-or-break-it or anything, but there are
some smaller things that definitely can be useful.
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
On Mon, Jun 10, 2024 at 05:45:19PM +0200, Magnus Hagander wrote:
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com>
wrote:
Is there a particular advantage to that approach as opposed to just using
"directory" mode for everything? I know pg_upgrade uses "custom" mode for
each of the databases, so a combo approach would be a closer match to the
existing behavior, but that doesn't strike me as an especially strong
reason to keep doing it that way.
A gazillion files to deal with? Much easier to work with individual custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
It's not things that are make-it-or-break-it or anything, but there are
some smaller things that definitely can be useful.
Makes sense, thanks for elaborating.
--
nathan
Magnus Hagander <magnus@hagander.net> writes:
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com>
wrote:
Is there a particular advantage to that approach as opposed to just using
"directory" mode for everything?
A gazillion files to deal with? Much easier to work with individual custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
You can always tar up the directory tree after-the-fact if you want
one file. Sure, that step's not parallelized, but I think we'd need
some non-parallelized copying to create such a file anyway.
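Tom's after-the-fact packing step might look like this with the standard tarfile module (the function name is ours, and a real backup script would likely also compress):

```python
# Pack a directory-format dump tree into a single tar file after the dump
# completes, as suggested above. Minimal sketch using only the stdlib.
import tarfile

def pack_dump(dump_dir, out_path):
    """Create one tar file from a directory-format dump tree."""
    with tarfile.open(out_path, "w") as tar:
        tar.add(dump_dir, arcname=".")   # store paths relative to the dump root
```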
regards, tom lane
On 2024-06-10 Mo 12:21, Tom Lane wrote:
Magnus Hagander <magnus@hagander.net> writes:
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com>
wrote:
Is there a particular advantage to that approach as opposed to just using
"directory" mode for everything?
A gazillion files to deal with? Much easier to work with individual custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
You can always tar up the directory tree after-the-fact if you want
one file. Sure, that step's not parallelized, but I think we'd need
some non-parallelized copying to create such a file anyway.
Yeah.
I think I can probably allow for Magnus' suggestion fairly easily, but
if I have to choose I'm going to go for the format that can be produced
with the maximum parallelism.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Mon, Jun 10, 2024 at 6:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com
wrote:
Is there a particular advantage to that approach as opposed to just
using
"directory" mode for everything?
A gazillion files to deal with? Much easier to work with individual
custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
You can always tar up the directory tree after-the-fact if you want
one file. Sure, that step's not parallelized, but I think we'd need
some non-parallelized copying to create such a file anyway.
That would require double the disk space.
But you can also just run pg_dump manually on each database and a
pg_dumpall -g like people are doing today -- I thought this whole thing was
about making it more convenient :)
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
Hi all,
With the help of Andrew and Dilip Kumar, I made a poc patch to dump all the
databases in archive format and then restore them using pg_restore.
Brief about the patch:
new option to pg_dumpall:
-F, --format=d|p (directory|plain) output file format (directory, plain
text (default))
Ex: ./pg_dumpall --format=directory --file=dumpDirName
dumps are as:
global.dat  ::: global SQL commands in simple plain format
map.dat     ::: dboid dbname entries for all databases, in simple text form
databases/  :::
  subdir dboid1 -> toc.dat and data files in archive format
  subdir dboid2 -> toc.dat and data files in archive format
  etc.
---------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude databases whose name matches PATTERN
When -g/--globals-only is given, only the globals are restored, not the
databases.
*Design*:
When --format=directory is specified and there is no toc.dat file in the
main directory, check for global.dat and map.dat. If both files exist in
the directory, first restore all globals from global.dat, then restore the
databases one by one from the map.dat list.
While restoring, skip the databases given with --exclude-database.
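The restore-side dispatch described here can be sketched as follows. This is an illustration only: plan_restore and the step tuples are hypothetical, and the real --exclude-database option matches psql-style patterns rather than the exact-name check shown.

```python
# Sketch of the restore flow from the design notes above: if toc.dat exists,
# treat the directory as a normal single-database dump; otherwise restore
# globals from global.dat first, then each database listed in map.dat,
# skipping any excluded names. Names and step tuples are invented.
import os

def plan_restore(dumpdir, exclude=()):
    """Return the list of restore steps implied by the directory contents."""
    if os.path.exists(os.path.join(dumpdir, "toc.dat")):
        return [("single", dumpdir)]
    steps = [("globals", os.path.join(dumpdir, "global.dat"))]
    with open(os.path.join(dumpdir, "map.dat")) as f:
        for line in f:
            dboid, dbname = line.split()
            if dbname in exclude:        # real option uses psql-style patterns
                continue
            steps.append(("database",
                          os.path.join(dumpdir, "databases", dboid)))
    return steps
```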
---------------------------------------------------------------------------
NOTE:
If needed, a single database can be restored from its subdirectory:
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid, look up the dbname in map.dat
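For the lookup mentioned in the note above, a tiny hypothetical helper (not part of the patch) that finds a database's dboid in map.dat:

```python
# Look up a database's dboid from the "dboid dbname" lines in map.dat,
# so the right databases/<dboid> subdirectory can be passed to pg_restore.
def dboid_for(map_path, dbname):
    """Return the dboid for dbname from map.dat, or None if absent."""
    with open(map_path) as f:
        for line in f:
            oid, name = line.split()
            if name == dbname:
                return oid
    return None
```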
--------------------------------------------------------------------------
Please let me know feedback for the attached patch.
On Tue, 11 Jun 2024 at 01:06, Magnus Hagander <magnus@hagander.net> wrote:
On Mon, Jun 10, 2024 at 6:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <
nathandbossart@gmail.com>
wrote:
Is there a particular advantage to that approach as opposed to just
using
"directory" mode for everything?
A gazillion files to deal with? Much easier to work with individual
custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
You can always tar up the directory tree after-the-fact if you want
one file. Sure, that step's not parallelized, but I think we'd need
some non-parallelized copying to create such a file anyway.
That would require double the disk space.
But you can also just run pg_dump manually on each database and a
pg_dumpall -g like people are doing today -- I thought this whole thing was
about making it more convenient :)
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v01_poc_pg_dumpall_with_directory_31dec.patch
From 332530c17f2fb46af55791e7c7ee6393767a29eb Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 31 Dec 2024 09:58:20 -0800
Subject: [PATCH] pg_dumpall with directory format and restore it by pg_restore
new option to pg_dumpall:
-F, --format=d|p|directory|plain output file format (directory, plain text (default))
Ex: ./pg_dumpall --format=directory --file=dumpDirName
dumps are as:
global.dat  ::: global SQL commands in simple plain format
map.dat     ::: dboid dbname entries for all databases, in simple text form
databases/  :::
  subdir dboid1 -> toc.dat and data files in archive format
  subdir dboid2 -> toc.dat and data files in archive format
  etc.
---------------------------------------------------------------------------
NOTE:
If needed, a single database can be restored from its subdirectory:
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude databases whose name matches PATTERN
When -g/--globals-only is given, only the globals are restored, not the databases.
Design:
When --format=directory is specified and there is no toc.dat in the main
directory, check for global.dat and map.dat. If both files exist in the
directory, first restore all globals from global.dat, then restore the
databases one by one from the map.dat list.
---
doc/src/sgml/ref/pg_dumpall.sgml | 30 ++
doc/src/sgml/ref/pg_restore.sgml | 30 ++
src/bin/pg_dump/pg_dumpall.c | 138 ++++++++--
src/bin/pg_dump/pg_restore.c | 571 ++++++++++++++++++++++++++++++++++++++-
4 files changed, 739 insertions(+), 30 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279..b6c9feb 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -582,6 +582,36 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+        Specify the format of the dump. When <literal>directory</literal> is
+        chosen, each database is dumped into its own subdirectory in archive
+        format. The default is <literal>plain</literal>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ The archive is a directory archive.
+ </para>
+ </listitem>
+ </varlistentry>
+
+     <varlistentry>
+      <term><literal>p</literal></term>
+      <term><literal>plain</literal></term>
+      <listitem>
+       <para>
+        Output a plain-text SQL script (the default).
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1..ab2e035 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -316,6 +316,16 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
<listitem>
@@ -932,6 +942,26 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+
</variablelist>
</para>
</refsect1>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 9a04e51..a748962 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool directory_format);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -147,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,11 +192,13 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format;
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
bool roles_only = false;
bool tablespaces_only = false;
+ bool directory_format = false;
PGconn *conn;
int encoding;
const char *std_strings;
@@ -237,7 +243,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +271,17 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format = optarg;
+ if ((strcmp(format, "directory") == 0 || strcmp(format, "d") == 0))
+ directory_format = true;
+ else if (strcmp(format, "plain") != 0 && strcmp(format, "p") != 0)
+ {
+ pg_log_error("invalid format specified: %s", format);
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+ break;
case 'g':
globals_only = true;
break;
@@ -497,9 +513,31 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout.
*/
- if (filename)
+ if (directory_format)
+ {
+ char toc_path[MAXPGPATH];
+
+ /*
+ * If directory format is specified then we must provide the directory
+ * name.
+ */
+ if (!filename || strcmp(filename, "") == 0)
+ pg_fatal("no output directory specified");
+
+ /* TODO: accept the empty existing directory. */
+ if (mkdir(filename, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +645,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, directory_format);
PQfinish(conn);
@@ -620,7 +658,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && !directory_format)
(void) fsync_fname(filename, false);
}
@@ -637,6 +675,7 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=d|p output file format (directory, plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1526,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool directory_format)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1546,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1554,30 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory format is specified then create a subdirectory under the
+ * main directory and each database dump file will be created under the
+ * subdirectory in archive mode as per single db pg_dump.
+ */
+ if (directory_format)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
+
+ /* Create a map file (to store dboid and dbname) */
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open map file: %s", strerror(errno));
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1585,14 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ if (directory_format)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
+
+ /* append dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1602,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (!directory_format)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1621,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (!directory_format)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if (!directory_format && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (!directory_format && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1644,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* close map file */
+ if (directory_format)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1657,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1666,26 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
- /*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
- */
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (dbfile)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+ appendPQExpBufferStr(&cmd, " -F d ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d..594ae27 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,45 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool _fileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers);
+static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
+ SimpleStringList *names);
+static PGconn *connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *f_glo);
+static int restoreAllDatabases(const char *dumpdirpath,
+ SimpleStringList database_exclude_names, RestoreOptions *opts,
+ int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +95,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList database_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +149,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +178,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +205,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +327,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of databases to be skipped while restoring */
+ simple_string_list_append(&database_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +358,16 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (database_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ if (globals_only && opts->cparams.dbname == NULL)
+ pg_fatal("option -g/--globals-only requires option -d/--dbname");
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -406,6 +445,76 @@ main(int argc, char **argv)
}
}
+ /*
+ * If directory format, first check whether toc.dat exists.
+ *
+ * If toc.dat exists, there is no need to check for map.dat and global.dat.
+ *
+ */
+ if (opts->format == archDirectory &&
+ inputFileSpec != NULL &&
+ !_fileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* if global.dat and map.dat exist, then process them */
+ if (_fileExistsInDirectory(inputFileSpec, "global.dat")
+ && _fileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ /* Found the global.dat and map.dat file so process. */
+ PGconn *conn = NULL;
+ SimpleStringList database_exclude_names = {NULL, NULL};
+
+ if (opts->cparams.dbname == NULL)
+ pg_fatal("option -d/--dbname should be given if using dump of dumpall with global.dat");
+
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create should be specified if using dump of dumpall with global.dat");
+
+ /* Connect to database so that we can execute global.dat */
+ conn = connectDatabase(opts->cparams.dbname, NULL,
+ opts->cparams.pghost, opts->cparams.pgport, opts->cparams.username,
+ TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+
+ /* Open global.dat file and execute all the sql commands */
+ execute_global_sql_commands(conn, inputFileSpec);
+
+ /* if globals-only, then return from here */
+ if (globals_only)
+ {
+ PQfinish(conn); /* close the connection */
+ return 0;
+ }
+
+ /* Get a list of database names that match the exclude patterns */
+ expand_dbname_patterns(conn, &database_exclude_patterns,
+ &database_exclude_names);
+
+ /* Close the db connection as we are done with globals */
+ PQfinish(conn);
+
+ /* Now restore all the databases from map.dat file */
+ return restoreAllDatabases(inputFileSpec, database_exclude_names,
+ opts, numWorkers);
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat archive.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -471,6 +580,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +593,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +732,455 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+static bool
+_fileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * Find a list of database names that match the given patterns.
+ * See also expand_table_name_patterns() in pg_dump.c
+ */
+static void
+expand_dbname_patterns(PGconn *conn,
+ SimpleStringList *patterns,
+ SimpleStringList *names)
+{
+ PQExpBuffer query;
+ PGresult *res;
+
+ if (patterns->head == NULL)
+ return; /* nothing to do */
+
+ query = createPQExpBuffer();
+
+ /*
+ * The loop below runs multiple SELECTs, which might sometimes result in
+ * duplicate entries in the name list, but we don't care, since all we're
+ * going to do is test membership of the list.
+ */
+
+ for (SimpleStringListCell *cell = patterns->head; cell; cell = cell->next)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query,
+ "SELECT datname FROM pg_catalog.pg_database n\n");
+ processSQLNamePattern(conn, query, cell->val, false,
+ false, NULL, "datname", NULL, NULL, NULL,
+ &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ cell->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+ for (int i = 0; i < PQntuples(res); i++)
+ {
+ simple_string_list_append(names, PQgetvalue(res, i, 0));
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ destroyPQExpBuffer(query);
+}
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/* ----------------
+ * ReadOneStatement()
+ *
+ * Read from the passed file pointer using fgetc() until a semicolon at the
+ * end of a line (the SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ * ----------------
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *f_glo)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(f_glo)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory.
+ *
+ * Databases are processed one by one, using the dbname and dboid entries
+ * listed in the map.dat file.
+ */
+static int
+restoreAllDatabases(const char *dumpdirpath,
+ SimpleStringList database_exclude_names, RestoreOptions *opts,
+ int numWorkers)
+{
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int lineno = 0;
+ int exit_code = 0;
+ int processed_db = 0;
+ FILE *pfile;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /*
+ * Read lines from map.dat, extracting the dbname and dboid of each
+ * database to restore.
+ */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ Oid dboid;
+ char dbname[MAXPGPATH + 1];
+ int dbexit_code;
+
+ lineno++;
+
+ /*
+ * Reset override_dbname so that objects can be restored into the
+ * already-created database (override_dbname is set by the -d/--dbname
+ * option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ /* Extract dbname and dboid from the line */
+ sscanf(line, "%u %s", &dboid, dbname);
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, dboid);
+
+ /* Report an error if the file has any corrupted data. */
+ if (!OidIsValid(dboid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", lineno);
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid);
+
+ /*
+ * The database given with -d/--dbname already exists, so clear
+ * createDB to avoid a database-creation error.
+ */
+ if (strcmp(dbname, opts->cparams.dbname) == 0)
+ opts->createDB = 0;
+
+ /* Skip any explicitly excluded database */
+ if (simple_string_list_member(&database_exclude_names, dbname))
+ {
+ pg_log_info("excluding database \"%s\"", dbname);
+ continue;
+ }
+
+ pg_log_info("restoring database \"%s\"", dbname);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ processed_db++;
+
+ /* Restore the createDB option so that later databases are created. */
+ if (strcmp(dbname, opts->cparams.dbname) == 0)
+ opts->createDB = 1;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", processed_db);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute all global SQL commands, one
+ * statement at a time. A semicolon at the end of a line is treated as
+ * the statement terminator.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open the global.dat file */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", global_file_path);
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ case PGRES_COPY_IN:
+ break;
+ default:
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Command was: %s", sqlstatement.data);
+ }
+
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
--
1.8.3.1
Here, I am attaching an updated patch. I fixed some bugs in the v01 patch
and also did some code cleanup.
TODO WIP 1: after excluding databases, we have the paths of all the
databases that need to be restored, so we can launch parallel workers for
each database. I am studying this part.
TODO WIP 2: for pg_restore's exclude-database=NAME, I am using NAME as of
now; I will try to make it a PATTERN. The PATTERN should be matched
against the entries in the map.dat file.
Please have a look over the patch and let me know your feedback.
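For TODO WIP 2, the exclude patterns ultimately have to be matched against the dbname entries read from map.dat. Here is a minimal, self-contained C sketch of that matching, using POSIX fnmatch() globbing as a simplified stand-in for psql-style patterns (the patch itself resolves patterns server-side via processSQLNamePattern); MapEntry, parse_map_line, and is_excluded are illustrative names, not part of the patch:

```c
#include <assert.h>
#include <fnmatch.h>
#include <stdio.h>
#include <string.h>

/* One "dboid dbname" line from map.dat, e.g. "5 postgres". */
typedef struct
{
	unsigned int dboid;
	char		dbname[64];
} MapEntry;

/* Parse one map.dat line; returns 1 on success, 0 on malformed input. */
static int
parse_map_line(const char *line, MapEntry *entry)
{
	return sscanf(line, "%u %63s", &entry->dboid, entry->dbname) == 2;
}

/*
 * Return 1 if dbname matches any exclude pattern.  fnmatch() implements
 * shell globbing, which is only an approximation of psql \d patterns.
 */
static int
is_excluded(const char *dbname, const char *const *patterns, int npatterns)
{
	for (int i = 0; i < npatterns; i++)
		if (fnmatch(patterns[i], dbname, 0) == 0)
			return 1;
	return 0;
}
```

With map.dat entries like `5 postgres`, the restore loop could skip any entry for which is_excluded() returns 1 before launching the per-database restore.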
On Tue, 31 Dec 2024 at 23:53, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Hi all,
With the help of Andrew and Dilip Kumar, I made a PoC patch to dump all
the databases in archive format and then restore them using pg_restore.
Brief about the patch:
new option to pg_dumpall:
-F, --format=d|p (directory|plain) output file format (directory, plain
text (default))
Ex: ./pg_dumpall --format=directory --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text
form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude databases whose name matches PATTERN
When we give the -g/--globals-only option, only the globals are restored,
no databases.
*Design*:
When --format=directory is specified and there is no toc.dat file in the
main directory, then check for global.dat and map.dat to restore all
databases. If both files exist in the directory, then first restore all
globals from global.dat and then restore all databases one by one from
the map.dat list.
While restoring, skip the databases that are given with exclude-database.
---------------------------------------------------------------------------
NOTE:
if needed, restore a single db by its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid, refer to the dbname in the map file
--------------------------------------------------------------------------
Please let me know feedback for the attached patch.
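The layout detection described in the design above (restore a plain pg_dump archive when toc.dat is present, fall back to global.dat plus map.dat for a pg_dumpall-style directory) boils down to a couple of stat() checks. A rough sketch under assumed names — file_exists_in_dir mirrors the patch's _fileExistsInDirectory helper, and classify_dump_dir is purely illustrative:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>

/* Return 1 if dir/filename exists and is a regular file. */
static int
file_exists_in_dir(const char *dir, const char *filename)
{
	char		path[1024];
	struct stat st;

	if (snprintf(path, sizeof(path), "%s/%s", dir, filename) >= (int) sizeof(path))
		return 0;				/* path too long; treat as absent */
	return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}

/*
 * Classify a dump directory: 0 = single-database archive (toc.dat),
 * 1 = pg_dumpall-style directory (global.dat + map.dat), -1 = unknown.
 */
static int
classify_dump_dir(const char *dir)
{
	if (file_exists_in_dir(dir, "toc.dat"))
		return 0;
	if (file_exists_in_dir(dir, "global.dat") &&
		file_exists_in_dir(dir, "map.dat"))
		return 1;
	return -1;
}
```

pg_restore would take the existing single-database path for 0 and the new globals-then-databases path for 1, erroring out for -1.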
On Tue, 11 Jun 2024 at 01:06, Magnus Hagander <magnus@hagander.net> wrote:
On Mon, Jun 10, 2024 at 6:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <
nathandbossart@gmail.com>
wrote:
Is there a particular advantage to that approach as opposed to just
using
"directory" mode for everything?
A gazillion files to deal with? Much easier to work with individual
custom
files if you're moving databases around and things like that.
Much easier to monitor eg sizes/dates if you're using it for backups.
You can always tar up the directory tree after-the-fact if you want
one file. Sure, that step's not parallelized, but I think we'd need
some non-parallelized copying to create such a file anyway.
That would require double the disk space.
But you can also just run pg_dump manually on each database and a
pg_dumpall -g like people are doing today -- I thought this whole thing was
about making it more convenient :)
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v02_poc_pg_dumpall_with_directory_2nd_jan.patch (application/octet-stream)
From 8de74fda7825301e3f10b10ce132751386ea5fb7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 1 Jan 2025 12:19:08 -0800
Subject: [PATCH] pg_dumpall with directory format and restore it by pg_restore
new option to pg_dumpall:
-F, --format=d|p|directory|plain output file format (directory, plain text (default))
Ex: ./pg_dumpall --format=directory --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=NAME exclude databases whose name matches NAME
When we give -g/--globals-only option, then only restore globals, no db restoring.
Design:
When --format=directory is specified and there is no toc.dat in main directory, then check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from map.dat list.
TODO: We can restore databases in parallel mode.
---
doc/src/sgml/ref/pg_dumpall.sgml | 30 ++
doc/src/sgml/ref/pg_restore.sgml | 30 ++
src/bin/pg_dump/pg_dumpall.c | 138 ++++++--
src/bin/pg_dump/pg_restore.c | 661 ++++++++++++++++++++++++++++++++++++++-
4 files changed, 829 insertions(+), 30 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279..b6c9feb 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -582,6 +582,36 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump each database into its
+ own subdirectory in archive format, pass <literal>directory</literal>.
+ The default is <literal>plain</literal>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ The archive is a directory archive.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ The archive is a plain-text archive (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1..ab2e035 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -316,6 +316,16 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
<listitem>
@@ -932,6 +942,26 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+
</variablelist>
</para>
</refsect1>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f797..066197b 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool directory_format);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -147,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,11 +192,13 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format;
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
bool roles_only = false;
bool tablespaces_only = false;
+ bool directory_format = false;
PGconn *conn;
int encoding;
const char *std_strings;
@@ -237,7 +243,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +271,17 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format = optarg;
+ if ((strcmp(format, "directory") == 0 || strcmp(format, "d") == 0))
+ directory_format = true;
+ else if (strcmp(format, "plain") != 0 && strcmp(format, "p") != 0)
+ {
+ pg_log_error("invalid format specified: %s", format);
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+ break;
case 'g':
globals_only = true;
break;
@@ -497,9 +513,31 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout.
*/
- if (filename)
+ if (directory_format)
+ {
+ char toc_path[MAXPGPATH];
+
+ /*
+ * If directory format is specified then we must provide the directory
+ * name.
+ */
+ if (!filename || strcmp(filename, "") == 0)
+ pg_fatal("no output directory specified");
+
+ /* TODO: accept the empty existing directory. */
+ if (mkdir(filename, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +645,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, directory_format);
PQfinish(conn);
@@ -620,7 +658,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && !directory_format)
(void) fsync_fname(filename, false);
}
@@ -637,6 +675,7 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=d|p output file format (directory, plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1526,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool directory_format)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1546,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1554,30 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory format is specified, create a subdirectory under the
+ * main directory; each database is then dumped into its own
+ * subdirectory there in archive mode, as with a single-database
+ * pg_dump.
+ */
+ if (directory_format)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ /* Create a map file (to store dboid and dbname) */
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1585,14 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ if (directory_format)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
+
+ /* append dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1602,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (!directory_format)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1621,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (!directory_format)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if (!directory_format && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (!directory_format && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1644,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* close map file */
+ if (directory_format)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1657,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1666,26 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
- /*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
- */
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (dbfile)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+ appendPQExpBufferStr(&cmd, " -F d ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d..55d1862 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,65 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDBoidListCell
+{
+ struct SimpleDBoidListCell *next;
+ Oid dboid;
+ const char *dbname;
+} SimpleDBoidListCell;
+
+typedef struct SimpleDBoidList
+{
+ SimpleDBoidListCell *head;
+ SimpleDBoidListCell *tail;
+} SimpleDBoidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool _fileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers);
+static PGconn *connectDatabase(const char *dbname, const char *conn_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *f_glo);
+static int restoreAllDatabases(const char *dumpdirpath,
+ SimpleStringList database_exclude_names, RestoreOptions *opts,
+ int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath);
+static int filter_dbnames_for_restore(SimpleDBoidList *dbname_oid_list,
+ SimpleStringList database_exclude_names);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDBoidList *dbname_oid_list);
+static void simple_dboid_list_append(SimpleDBoidList *list, Oid dboid,
+ const char *dbname);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +115,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList database_exclude_names = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +169,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +198,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +225,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the globals from global.dat in the directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +347,14 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database names to be skipped while restoring */
+ simple_string_list_append(&database_exclude_names, optarg);
+ /*
+ * XXX: TODO: for now only exact database names are considered;
+ * pattern matching could be implemented as well.
+ */
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +382,16 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (database_exclude_names.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ if (globals_only && opts->cparams.dbname == NULL)
+ pg_fatal("option -g/--globals-only requires option -d/--dbname");
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -406,6 +469,68 @@ main(int argc, char **argv)
}
}
+ /*
+ * For directory format, first check whether toc.dat exists.
+ *
+ * If toc.dat exists, there is no need to check for map.dat and global.dat.
+ */
+ if (opts->format == archDirectory &&
+ inputFileSpec != NULL &&
+ !_fileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat both exist, open and process them. */
+ if (_fileExistsInDirectory(inputFileSpec, "global.dat")
+ && _fileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ PGconn *conn = NULL;
+
+ if (opts->cparams.dbname == NULL)
+ pg_fatal("option -d/--dbname must be specified when restoring a pg_dumpall dump with global.dat");
+
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create must be specified when restoring a pg_dumpall dump with global.dat");
+
+ /* Connect to database so that we can execute global.dat */
+ conn = connectDatabase(opts->cparams.dbname, NULL,
+ opts->cparams.pghost, opts->cparams.pgport, opts->cparams.username,
+ TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+
+ /* Open global.dat file and execute all the sql commands */
+ execute_global_sql_commands(conn, inputFileSpec);
+
+ /* Close the db connection as we are done with globals */
+ PQfinish(conn);
+
+ /* if globals-only, then return from here */
+ if (globals_only)
+ return 0;
+
+ /* Now restore all the databases from map.dat file */
+ return restoreAllDatabases(inputFileSpec, database_exclude_names,
+ opts, numWorkers);
+ }/* end if */
+ }/* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from a dump containing a toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -471,6 +596,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +609,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=NAME exclude databases whose name matches NAME\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +748,529 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+static bool
+_fileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/* ----------------
+ * ReadOneStatement()
+ *
+ * Read from the given file pointer using fgetc() until a semicolon at the
+ * end of a line (the SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ * ----------------
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *f_glo)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(f_glo)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * Remove from the dbname/oid list any names given with the
+ * --exclude-database option.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(SimpleDBoidList *dbname_oid_list,
+ SimpleStringList database_exclude_names)
+{
+ int countdb = 0;
+ SimpleDBoidListCell *cell = dbname_oid_list->head;
+ SimpleDBoidListCell *precell = NULL;
+
+ /* Return 0 if there is no db to restore. */
+ if (cell == NULL)
+ return 0;
+
+ while (cell != NULL)
+ {
+ bool skip_db = false;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = database_exclude_names.head; celldb; celldb = celldb->next)
+ {
+ if (strcmp(celldb->val, cell->dbname) == 0)
+ {
+ /* Flag this database for removal from the list. */
+ skip_db = true;
+ break;
+ }
+ }
+
+ /* Increment count if db needs to be restored. */
+ if (!skip_db)
+ {
+ countdb++;
+ precell = cell;
+ cell = cell->next;
+ }
+ else
+ {
+ if (precell != NULL)
+ {
+ precell->next = cell->next;
+ pfree(cell);
+ cell = precell->next;
+ }
+ else
+ {
+ dbname_oid_list->head = cell->next;
+ pfree(cell);
+ cell = dbname_oid_list->head;
+ }
+ }
+ }
+
+ return countdb;
+}
+
+/*
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and corresponding database OIDs.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDBoidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: %s", strerror(errno));
+
+ /* Append all the dbname and dboid to the list. */
+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid dboid;
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dbname and dboid from line */
+ sscanf(line, "%u %s", &dboid, dbname);
+ pg_log_info("found dbname \"%s\" with dboid %u in map.dat file while restoring", dbname, dboid);
+
+ /* Report an error if the file has any corrupted data. */
+ if (!OidIsValid(dboid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * TODO: before adding a dbname to the list, we could verify whether
+ * this database needs to be skipped for restore.
+ */
+ simple_dboid_list_append(dbname_oid_list, dboid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * Restore all databases whose dumps are present in the directory, based
+ * on the map.dat file mapping.
+ *
+ * Databases specified with the --exclude-database option are skipped.
+ */
+static int
+restoreAllDatabases(const char *dumpdirpath,
+ SimpleStringList database_exclude_names, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDBoidList dbname_oid_list = {NULL, NULL};
+ SimpleDBoidListCell *cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ /* Skip any explicitly excluded database. */
+ num_db_restore = filter_dbnames_for_restore(&dbname_oid_list, database_exclude_names);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO: at this point we have built the list of databases to be
+ * restored, after skipping the exclude-database names. Next we could
+ * launch parallel workers to restore these databases.
+ */
+ cell = dbname_oid_list.head;
+
+ while(cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, cell->dboid);
+
+ /*
+ * Database -d/--dbname is already created so reset createDB to ignore
+ * database creation error.
+ */
+ if (strcmp(cell->dbname, opts->cparams.dbname) == 0)
+ opts->createDB = 0;
+
+ pg_log_info("restoring database \"%s\"", cell->dbname);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ /* Set createDB option to create new database. */
+ if (strcmp(cell->dbname, opts->cparams.dbname) == 0)
+ opts->createDB = 1;
+
+ cell = cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ return exit_code;
+}
+
+/*
+ * Open the global.dat file and execute all global SQL commands, one
+ * statement at a time.
+ *
+ * A semicolon at end of line is treated as the statement terminator.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* now open global.dat file */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ case PGRES_COPY_IN:
+ break;
+ default:
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * Append a node to the end of the list.
+ */
+static void
+simple_dboid_list_append(SimpleDBoidList *list, Oid dboid, const char *dbname)
+{
+ SimpleDBoidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDBoidListCell);
+
+ cell->next = NULL;
+ cell->dboid = dboid;
+ cell->dbname = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
--
1.8.3.1
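As an aside, the statement-splitting loop in ReadOneStatement() above can be exercised in isolation. The following is a minimal standalone sketch, not the patch's code: the function name, the fixed-size caller-supplied buffer, and the return convention are assumptions made purely for illustration. It reads characters until it sees a line that ends in a semicolon (";\n"), matching the terminator rule the patch uses for global.dat.

```c
#include <stdio.h>

/*
 * Hypothetical sketch of the ReadOneStatement() loop: accumulate
 * characters until a newline immediately preceded by a semicolon.
 * Returns the number of bytes stored, or -1 on EOF with no input.
 */
static int
read_one_statement(char *buf, size_t bufsize, FILE *fp)
{
	size_t		len = 0;
	int			c = 0;

	while (len < bufsize - 1 && (c = fgetc(fp)) != EOF)
	{
		buf[len++] = (char) c;

		/* a semicolon just before the newline terminates the statement */
		if (c == '\n' && len > 1 && buf[len - 2] == ';')
			break;
	}
	buf[len] = '\0';
	return (c == EOF && len == 0) ? -1 : (int) len;
}
```

Note that, like the patch, this treats ";\n" as the terminator, so a multi-line statement is kept whole until a line actually ends with a semicolon.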
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote:
Here, I am attaching an updated patch. I fixed some bugs of v01 patch and
did some code cleanup also.
Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in check-world are
failing. Would you mind resolving those issues? Also, if you haven't
already, please add an entry to the next commitfest [0] to ensure that 1)
this feature is tracked and 2) the automated tests will run.
+ if (dbfile)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+ appendPQExpBufferStr(&cmd, " -F d ");
+ }
Have you given any thought to allowing a directory of custom format files,
as discussed upthread [1]? Perhaps that is better handled as a follow-up
patch, but it'd be good to understand the plan, anyway.
[0]: https://commitfest.postgresql.org
[1]: /messages/by-id/CABUevExoQ26jo+aQ9QZq+UMA1aD6gfpm9xBnh_t5e0DhaCeRYA@mail.gmail.com
--
nathan
On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <nathandbossart@gmail.com>
wrote:
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote:
Here, I am attaching an updated patch. I fixed some bugs of v01 patch
and
did some code cleanup also.
Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in check-world are
failing. Would you mind resolving those issues? Also, if you haven't
already, please add an entry to the next commitfest [0] to ensure that 1)
this feature is tracked and 2) the automated tests will run.
Thanks, Nathan, for the quick response.
I fixed the documentation build and the check-world failures in the latest
patch; the docs now build and check-world passes.
I added a commitfest entry for this patch [0].
[0]: https://commitfest.postgresql.org/52/5495/
+ if (dbfile)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+ appendPQExpBufferStr(&cmd, " -F d ");
+ }

Have you given any thought to allowing a directory of custom format files,
as discussed upthread [1]? Perhaps that is better handled as a follow-up
patch, but it'd be good to understand the plan, anyway.
I will make these changes and test them; I will share my findings after
some testing.
Apart from these bug fixes, I added code to handle --exclude-database=PATTERN;
earlier, only exact NAMEs were used to skip databases during restore.
TODO: add TAP (.pl) test cases for the newly added options.
Here, I am attaching an updated patch for review and feedback.
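The pattern-based exclusion mentioned above can be sketched in isolation. This is not the patch's implementation (psql-style patterns are normally expanded server-side, as pg_dumpall does for --exclude-database); here POSIX fnmatch() stands in as an assumed, illustrative matcher, and the function name is hypothetical:

```c
#include <fnmatch.h>

/*
 * Illustrative only: decide whether a database name matches any
 * --exclude-database pattern.  fnmatch() is a stand-in for the
 * psql-style pattern rules the real option would use.
 */
static int
db_is_excluded(const char *dbname, const char *const *patterns, int npatterns)
{
	for (int i = 0; i < npatterns; i++)
	{
		if (fnmatch(patterns[i], dbname, 0) == 0)
			return 1;			/* matched an exclude pattern: skip it */
	}
	return 0;
}
```

For example, with patterns {"test*", "postgres"}, "test_db" and "postgres" would be skipped while "mydb" would be restored.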
[1]: /messages/by-id/CABUevExoQ26jo+aQ9QZq+UMA1aD6gfpm9xBnh_t5e0DhaCeRYA@mail.gmail.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v03-pg_dumpall-with-directory-format-and-restore-08_jan.patch (application/octet-stream)
From 9e854f93197c230b82047dfd802c1b64cb3d2903 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 8 Jan 2025 00:15:54 +0530
Subject: [PATCH] pg_dumpall with directory format and restore it by pg_restore
new option to pg_dumpall:
-F, --format=d|p|directory|plain output file format (directory, plain text (default))
Ex: ./pg_dumpall --format=directory --file=dumpDirName
dumps are as:
global.dat ::: global SQL commands in simple plain format
map.dat ::: "dboid dbname" entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
if needed, a single db can be restored from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in the map.dat file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=directory is specified and there is no toc.dat in the main
directory, check for global.dat and map.dat to restore all databases. If both
files exist in the directory, first restore all globals from global.dat and
then restore the databases one by one from the map.dat list.
TODO: We can dump and restore databases in parallel mode.
This needs more study.
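As a sanity check of the map.dat format described above ("dboid dbname", one entry per line), here is a hypothetical parser for a single line. The function name and the bounded %s conversion are assumptions for illustration; the patch itself uses a bare "%u %s":

```c
#include <stdio.h>

/*
 * Hypothetical sketch: parse one map.dat line of the form "<dboid> <dbname>".
 * A field width derived from the buffer size bounds the name copy.
 * Returns 1 on success, 0 on a malformed line.
 */
static int
parse_map_line(const char *line, unsigned int *dboid,
			   char *dbname, size_t dbname_size)
{
	char		fmt[32];

	/* build a scanf format like "%u %63s" for a 64-byte name buffer */
	snprintf(fmt, sizeof(fmt), "%%u %%%zus", dbname_size - 1);
	return sscanf(line, fmt, dboid, dbname) == 2;
}
```

A line such as "5 postgres" parses into dboid 5 and dbname "postgres"; a line whose first field is not a number is rejected.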
---
doc/src/sgml/ref/pg_dumpall.sgml | 35 ++
doc/src/sgml/ref/pg_restore.sgml | 30 ++
src/bin/pg_dump/pg_dumpall.c | 150 ++++--
src/bin/pg_dump/pg_restore.c | 760 ++++++++++++++++++++++++++++++-
4 files changed, 945 insertions(+), 30 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279258..51deaae0d1 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -125,6 +125,41 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. With the directory format,
+ each database is dumped into a separate subdirectory in archive
+ format. The default is plain.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ The archive is a directory archive.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ A plain-text SQL script. This is the default.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719..ab2e035671 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -315,6 +315,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -932,6 +942,26 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+
</variablelist>
</para>
</refsect1>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c..ceb4c908d8 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool directory_format);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -147,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,11 +192,13 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = NULL;
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
bool roles_only = false;
bool tablespaces_only = false;
+ bool directory_format = false;
PGconn *conn;
int encoding;
const char *std_strings;
@@ -237,7 +243,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +271,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = optarg;
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +422,26 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ if (formatName)
+ {
+ switch (formatName[0])
+ {
+ case 'd':
+ case 'D':
+ directory_format = true;
+ break;
+
+ case 'p':
+ case 'P':
+ /* Default plain format. */
+ break;
+
+ default:
+ pg_fatal("unrecognized dump format \"%s\"; please specify \"d\" or \"p\"",
+ formatName);
+ }
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -497,9 +525,31 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout.
*/
- if (filename)
+ if (directory_format)
+ {
+ char toc_path[MAXPGPATH];
+
+ /*
+ * If directory format is specified then we must provide the directory
+ * name.
+ */
+ if (!filename || strcmp(filename, "") == 0)
+ pg_fatal("no output directory specified");
+
+ /* TODO: accept the empty existing directory. */
+ if (mkdir(filename, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +657,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, directory_format);
PQfinish(conn);
@@ -620,7 +670,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && !directory_format)
(void) fsync_fname(filename, false);
}
@@ -637,6 +687,7 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=d|p output file format (directory, plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1538,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool directory_format)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1558,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1566,30 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory format is specified, create a subdirectory under the
+ * main directory; each database dump is then created under that
+ * subdirectory in archive mode, as in a single-database pg_dump.
+ */
+ if (directory_format)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ /* Create a map file (to store dboid and dbname) */
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open map file: %s", strerror(errno));
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1597,14 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ if (directory_format)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
+
+ /* Append dboid and dbname to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1614,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (!directory_format)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1633,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (!directory_format)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if (!directory_format && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, directory_format ? dbfilepath : NULL);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (!directory_format && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1656,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* close map file */
+ if (directory_format)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1669,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1678,26 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
- /*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
- */
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (dbfile)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+ appendPQExpBufferStr(&cmd, " -F d ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938..273f2002f1 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,69 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid dboid;
+ const char *dbname;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool _fileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers);
+static PGconn *connectDatabase(const char *dbname, const char *conn_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *f_glo);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_list_delete(SimpleStringList *list,
+ SimpleStringListCell *cell, SimpleStringListCell *prev);
+static void simple_dboid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +119,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +173,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +351,14 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of databases to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ /*
+ * XXX: TODO: for now we consider only database names, but this
+ * could be extended to handle patterns as well.
+ */
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +386,16 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ if (globals_only && opts->cparams.dbname == NULL)
+ pg_fatal("option -g/--globals-only requires option -d/--dbname");
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -406,6 +473,69 @@ main(int argc, char **argv)
}
}
+ /*
+ * If directory format, first check whether toc.dat exists.
+ *
+ * If toc.dat exists, there is no need to check for map.dat and
+ * global.dat.
+ */
+ if (opts->format == archDirectory &&
+ inputFileSpec != NULL &&
+ !_fileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* if global.dat and map.dat exist, then process them */
+ if (_fileExistsInDirectory(inputFileSpec, "global.dat")
+ && _fileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ /* Found the global.dat and map.dat file so process. */
+ PGconn *conn = NULL;
+
+ if (opts->cparams.dbname == NULL)
+ pg_fatal("-d/--dbname must be specified when restoring a dump of pg_dumpall with global.dat");
+
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create should be specified if using dump of dumpall with global.dat");
+
+ /* Connect to database so that we can execute global.dat */
+ conn = connectDatabase(opts->cparams.dbname, NULL,
+ opts->cparams.pghost, opts->cparams.pgport, opts->cparams.username,
+ TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+
+ /* Open global.dat file and execute all the sql commands */
+ execute_global_sql_commands(conn, inputFileSpec);
+
+ /* if globals-only, then return from here */
+ if (globals_only)
+ {
+ PQfinish(conn);
+ return 0;
+ }
+
+ /* Now restore all the databases from map.dat file */
+ return restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns,
+ opts, numWorkers);
+ }/* end if */
+ }/* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -471,6 +601,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +614,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +753,623 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+static bool
+_fileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/* ----------------
+ * ReadOneStatement()
+ *
+ * Read from the passed file pointer using fgetc() until a semicolon (the
+ * SQL statement terminator in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ * ----------------
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *f_glo)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(f_glo)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * Remove from the database list any names that match the patterns given
+ * with the --exclude-database option.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int countdb = 0;
+ SimpleDatabaseOidListCell *cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *precell = NULL;
+
+ /* Return 0 if there is no db to restore. */
+ if (cell == NULL)
+ return 0;
+
+ while (cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleStringListCell *prev = NULL;
+ SimpleDatabaseOidListCell *next = cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if (is_full_pattern(conn, cell->dbname, celldb->val))
+ {
+ /*
+ * This dbname must be skipped, so set the flag to remove it
+ * from the list.
+ */
+ skip_db_restore = true;
+
+ /*
+ * This pattern has matched, so delete it from the list to
+ * avoid re-checking it.
+ */
+ simple_string_list_delete(&db_exclude_patterns, celldb, prev);
+ break;
+ }
+
+ prev = celldb;
+ }
+
+ /* Increment count if db needs to be restored. */
+ if (skip_db_restore)
+ simple_dboid_list_delete(dbname_oid_list, cell, precell);
+ else
+ {
+ countdb++; /* Increment db counter. */
+ precell = cell;
+ }
+
+ cell = next; /* Process next dbname from dbname list. */
+ }
+
+ return countdb;
+}
+
+/*
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding dboids.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: %s", strerror(errno));
+
+ /* Append all the dbname and dboid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid dboid;
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dbname and dboid from line */
+ sscanf(line, "%u %s" , &dboid, dbname);
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, dboid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(dboid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether
+ * this database should be skipped for restore, but for now we build
+ * a list of all the databases.
+ */
+ simple_dboid_list_append(dbname_oid_list, dboid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * This will skip restoring for databases that are specified with
+ * exclude-database option.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found %d database names in map.dat file", num_total_db);
+
+ /* Skip any explicitly excluded database. */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO: by now we have built the list of databases to be restored,
+ * after skipping the names given with --exclude-database. We could
+ * launch parallel workers here to restore these databases.
+ */
+ cell = dbname_oid_list.head;
+
+ while (cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * the already-created database (used with the -d/--dbname option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, cell->dboid);
+
+ /*
+ * The database given with -d/--dbname already exists, so reset createDB
+ * to ignore the database-creation error.
+ */
+ if (strcmp(cell->dbname, opts->cparams.dbname) == 0)
+ opts->createDB = 0;
+
+ pg_log_info("restoring database \"%s\"", cell->dbname);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ /* Set createDB option to create new database. */
+ if (strcmp(cell->dbname, opts->cparams.dbname) == 0)
+ opts->createDB = 1;
+
+ cell = cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ return exit_code;
+}
+
+/*
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time.
+ *
+ * A semicolon is treated as the statement terminator.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* now open global.dat file */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ case PGRES_COPY_IN:
+ break;
+ default:
+ pg_log_error("could not execute query: %s \nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * Append a node to the end of the list.
+ */
+static void
+simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->dboid = dboid;
+ cell->dbname = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * delete cell from string list.
+ */
+static void
+simple_string_list_delete(SimpleStringList *list, SimpleStringListCell *cell,
+ SimpleStringListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * delete cell from database and oid list.
+ */
+static void
+simple_dboid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * is_full_pattern
+ *
+ * Returns true if the given pattern matches the whole of the given string.
+ *
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ /*
+ * If the output of the substring function matches str, then the
+ * pattern matches the whole string.
+ */
+ if (strcmp(outstr, str) == 0)
+ return true;
+ else
+ return false;
+ }
+ }
+ else
+ pg_log_error("could not execute query: %s \nCommand was: %s", PQerrorMessage(conn), query->data);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
--
2.39.3
Hi all,
On Wed, 8 Jan 2025 at 00:34, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <nathandbossart@gmail.com>
wrote:
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote:
Here, I am attaching an updated patch. I fixed some bugs of v01 patch
and
did some code cleanup also.
Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in check-world are
failing. Would you mind resolving those issues? Also, if you haven't
already, please add an entry to the next commitfest [0] to ensure that
1)
this feature is tracked and 2) the automated tests will run.
Thanks Nathan for the quick response.
I fixed the documentation build and the check-world failures in the latest
patch. Now the docs build and check-world passes.
I added an entry to the commitfest for this patch. [0]
+ if (dbfile)
+ {
+ 	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ 			  dbfile, create_opts);
+ 	appendPQExpBufferStr(&cmd, " -F d ");
+ }

Have you given any thought to allowing a directory of custom format files,
as discussed upthread [1]? Perhaps that is better handled as a follow-up
patch, but it'd be good to understand the plan, anyway.
I will make these changes and will test. I will update my findings after
doing some testing.
In the latest patch, I added dump and restore support for the
directory/custom/tar/plain formats. Please consider this patch for review
and testing.
*Design*:
When --format=d|c|t is given, all global SQL commands are dumped into
global.dat in plain SQL format, and a map.dat file is written containing each
dbname and its dboid. For each database, a separate subdirectory named after
its dboid is created under the databases directory, and that database is
dumped there in the chosen archive format (d|c|t).
While restoring, all global SQL commands are restored from global.dat first,
and then the databases are restored one by one. Since pg_dumpall supports
--exclude-database, the same option is supported by pg_restore as well, to
skip restoring databases that match the specified patterns.
To restore a single database, we can specify its particular subdirectory
under the databases folder. To find the subdirectory name, look up the dbname
in map.dat.
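The map.dat file described above is just a plain-text "dboid dbname" mapping, one database per line. As a rough illustration (not code from the patch — the helper name parse_map_line and the buffer size are assumptions for this sketch), reading one entry could look like:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: parse one "dboid dbname" line in the format that
 * dumpDatabases() writes to map.dat.  dbname must point to a buffer of at
 * least 64 bytes; the width in the sscanf format caps the name read so it
 * cannot overflow that buffer.  Returns 0 on success, -1 on a bad line. */
static int
parse_map_line(const char *line, unsigned int *dboid, char *dbname)
{
    if (sscanf(line, "%u %63s", dboid, dbname) != 2)
        return -1;
    return 0;
}
```

Given the line "5 postgres", this yields OID 5 and name "postgres", which is how a restore locates the databases/5 subdirectory for the postgres database.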
*TODO*: Next I will work on test cases for these newly added options to
pg_dumpall and pg_restore.
Here, I am attaching the v04 patch for testing and review.
Apart from these bugs, I added code to handle --exclude-database=
PATTERN. Earlier I was using NAME only to skip databases for restore.
TODO: .pl test cases for the newly added options.
Here, I am attaching an updated patch for review and feedback.
[1] /messages/by-id/CABUevExoQ26jo+aQ9QZq+UMA1aD6gfpm9xBnh_t5e0DhaCeRYA@mail.gmail.com
--
nathan
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v04-pg_dumpall-with-directory-format-and-restore-08_jan.patch (application/octet-stream)
From ca24f775d1222cf877b5f084c49a940c04da6ce2 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 8 Jan 2025 19:50:54 +0530
Subject: [PATCH] pg_dumpall with directory/tar/custom format and restore it by
pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (directory, tar, custom, plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
first restore all globals from global.dat, and then restore all databases one by one
from the map.dat list.
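The detection step described above — treat a directory as a pg_dumpall archive only when toc.dat is absent but global.dat and map.dat are both present — can be sketched as a standalone illustration (this mirrors the patch's _fileExistsInDirectory() check, but the function names and fixed path size here are assumptions, not the patch code):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Return 1 when "dir/name" exists and is a regular file. */
static int
file_exists_in_dir(const char *dir, const char *name)
{
    char        path[1024];
    struct stat st;

    if (snprintf(path, sizeof(path), "%s/%s", dir, name) >= (int) sizeof(path))
        return 0;               /* path too long: treat as missing */
    return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}

/* Return 1 when the directory looks like a pg_dumpall directory archive:
 * no toc.dat, but both global.dat and map.dat are present. */
static int
is_dumpall_archive(const char *dir)
{
    return !file_exists_in_dir(dir, "toc.dat") &&
           file_exists_in_dir(dir, "global.dat") &&
           file_exists_in_dir(dir, "map.dat");
}
```

With this shape, a plain pg_dump directory archive (which always contains toc.dat) keeps taking the single-database restore path, and only the new pg_dumpall layout falls through to the globals-then-databases path.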
TODO: We can dump and restore databases in parallel mode.
This needs more study.
---
doc/src/sgml/ref/pg_dumpall.sgml | 36 +++++-
src/bin/pg_dump/pg_dumpall.c | 103 +++++++++------
src/bin/pg_dump/pg_restore.c | 213 +++++++++++++++++--------------
3 files changed, 214 insertions(+), 138 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 51deaae0d1..77a8d6a0c5 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -141,7 +141,12 @@ PostgreSQL documentation
<term><literal>directory</literal></term>
<listitem>
<para>
- The archive is a directory archive.
+ Output a directory-format archive suitable for input into pg_restore. This will create a directory
+ with one file for each table and large object being dumped, plus a so-called Table of Contents
+ file describing the dumped objects in a machine-readable format that pg_restore can read. A
+ directory format archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This format is compressed
+ by default using gzip and also supports parallel dumps.
</para>
</listitem>
</varlistentry>
@@ -151,10 +156,37 @@ PostgreSQL documentation
<term><literal>plain</literal></term>
<listitem>
<para>
- The archive is a plain archive.(by default also)
+ Output a plain-text SQL script file (the default).
</para>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
</listitem>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index ceb4c908d8..c26a14c617 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -66,10 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn, bool directory_format);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
static int runPgDump(const char *dbname, const char *create_opts,
- char *dbfile);
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -84,6 +84,7 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -192,13 +193,13 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
- const char *formatName = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
bool roles_only = false;
bool tablespaces_only = false;
- bool directory_format = false;
PGconn *conn;
int encoding;
const char *std_strings;
@@ -272,7 +273,7 @@ main(int argc, char *argv[])
appendShellString(pgdumpopts, filename);
break;
case 'F':
- formatName = optarg;
+ formatName = pg_strdup(optarg);
break;
case 'g':
globals_only = true;
@@ -422,25 +423,7 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- if (formatName)
- {
- switch (formatName[0])
- {
- case 'd':
- case 'D':
- directory_format = true;
- break;
-
- case 'p':
- case 'P':
- /* Default plain format. */
- break;
-
- default:
- pg_fatal("unrecognized dump format \"%s\"; please specify \"d\", or \"p\" ",
- formatName);
- }
- }
+ archDumpFormat = parseDumpFormat(formatName);
/*
* If password values are not required in the dump, switch to using
@@ -527,7 +510,7 @@ main(int argc, char *argv[])
/*
* Open the output file if required, otherwise use stdout.
*/
- if (directory_format)
+ if (archDumpFormat != archNull)
{
char toc_path[MAXPGPATH];
@@ -657,7 +640,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn, directory_format);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -670,7 +653,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync && !directory_format)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -1538,7 +1521,7 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn, bool directory_format)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
@@ -1571,7 +1554,7 @@ dumpDatabases(PGconn *conn, bool directory_format)
* main directory and each database dump file will be created under the
* subdirectory in archive mode as per single db pg_dump.
*/
- if (directory_format)
+ if (archDumpFormat != archNull)
{
char map_file_path[MAXPGPATH];
@@ -1597,7 +1580,7 @@ dumpDatabases(PGconn *conn, bool directory_format)
if (strcmp(dbname, "template0") == 0)
continue;
- if (directory_format)
+ if (archDumpFormat != archNull)
{
snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
@@ -1614,7 +1597,7 @@ dumpDatabases(PGconn *conn, bool directory_format)
pg_log_info("dumping database \"%s\"", dbname);
- if (!directory_format)
+ if (archDumpFormat == archNull)
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
@@ -1633,21 +1616,21 @@ dumpDatabases(PGconn *conn, bool directory_format)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- if (!directory_format)
+ if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (!directory_format && filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts, directory_format ? dbfilepath : NULL);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (!directory_format && filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1656,8 +1639,8 @@ dumpDatabases(PGconn *conn, bool directory_format)
}
}
- /* close map file */
- if (directory_format)
+ /* Close map file */
+ if (archDumpFormat != archNull)
fclose(map_file);
PQclear(res);
@@ -1669,7 +1652,8 @@ dumpDatabases(PGconn *conn, bool directory_format)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts, char *dbfile)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1678,11 +1662,23 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- if (dbfile)
+ /*
+ * If this is not a plain dump, then append file name and dump format to
+ * the pg_dump command.
+ */
+ if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
dbfile, create_opts);
- appendPQExpBufferStr(&cmd, " -F d ");
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
+ else
+ pg_fatal("invalid dump format %d specified, please use d/c/t only", archDumpFormat);
}
else
{
@@ -2092,3 +2088,30 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("invalid dump format \"%s\" specified", format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 273f2002f1..e6a70adea9 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -59,9 +59,9 @@
typedef struct SimpleDatabaseOidListCell
{
- struct SimpleDatabaseOidListCell *next;
- Oid dboid;
- const char *dbname;
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
} SimpleDatabaseOidListCell;
typedef struct SimpleDatabaseOidList
@@ -71,11 +71,11 @@ typedef struct SimpleDatabaseOidList
} SimpleDatabaseOidList;
static void
-simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid, const char *dbname);
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
-static bool _fileExistsInDirectory(const char *dir, const char *filename);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
static bool restoreOneDatabase(const char *inputFileSpec,
RestoreOptions *opts, int numWorkers);
static PGconn *connectDatabase(const char *dbname, const char *conn_string,
@@ -90,12 +90,12 @@ static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list);
-static void simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid,
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
const char *dbname);
static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
static void simple_string_list_delete(SimpleStringList *list,
SimpleStringListCell *cell, SimpleStringListCell *prev);
-static void simple_dboid_list_delete(SimpleDatabaseOidList *list,
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
@@ -352,12 +352,8 @@ main(int argc, char **argv)
opts->exit_on_error = true;
break;
case 6:
- /* list of databases those needs to skip while restoring */
+ /* list of database patterns to skip while restoring */
simple_string_list_append(&db_exclude_patterns, optarg);
- /*
- * XXX: TODO as of now, considering only db names but we can
- * implement for patterns also.
- */
break;
default:
@@ -474,49 +470,53 @@ main(int argc, char **argv)
}
/*
- * If directory format, then first check that toc.dat file exist or not?
- *
- * if toc.dat exist, then no need to check for map.dat and global.dat
- *
+ * If the toc.dat file is not present in the given path, then check for
+ * global.dat and map.dat files. If both files are present, then
+ * restore all the databases from map.dat file list and skip restoring for
+ * --exclude-database patterns.
*/
- if (opts->format == archDirectory &&
- inputFileSpec != NULL &&
- !_fileExistsInDirectory(inputFileSpec, "toc.dat"))
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
{
- /* if global.dat and map.dat are exist, then open them */
- if (_fileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat")
- && _fileExistsInDirectory(pg_strdup(inputFileSpec), "map.dat"))
+ /* If global.dat and map.dat exist, then process them. */
+ if (IsFileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat")
+ && IsFileExistsInDirectory(pg_strdup(inputFileSpec), "map.dat"))
{
- /* Found the global.dat and map.dat file so process. */
- PGconn *conn = NULL;
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ /*
+ * If we are restoring dump of multiple databases, then connection
+ * should be given.
+ */
if (opts->cparams.dbname == NULL)
- pg_fatal(" -d/--dbanme should be given if using dump of dumpall and global.dat");
+ pg_fatal("-d/--dbname should be given when using archive dump of pg_dumpall");
+ /*
+ * To restore multiple databases, create database option should be
+ * specified.
+ */
if (opts->createDB != 1)
- pg_fatal("option -C/--create should be specified if using dump of dumpall with global.dat");
+ pg_fatal("option -C/--create should be specified when using dump of pg_dumpall");
- /* Connect to database so that we can execute global.dat */
- conn = connectDatabase(opts->cparams.dbname, NULL,
- opts->cparams.pghost, opts->cparams.pgport, opts->cparams.username,
- TRI_DEFAULT, false);
+ /* Connect to database to execute global sql commands from global.dat file. */
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
if (!conn)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
- /* Open global.dat file and execute all the sql commands */
+ /* Open global.dat file and execute all the sql commands. */
execute_global_sql_commands(conn, inputFileSpec);
- /* if globals-only, then return from here */
+ /* If globals-only, then return from here. */
if (globals_only)
{
PQfinish(conn);
return 0;
}
- /* Now restore all the databases from map.dat file */
- return restoreAllDatabases(conn, inputFileSpec,
- db_exclude_patterns,
+ /* Now restore all the databases from map.dat file. */
+ return restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
opts, numWorkers);
}/* end if */
}/* end if */
@@ -754,11 +754,16 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the file exists in the given directory.
+ */
static bool
-_fileExistsInDirectory(const char *dir, const char *filename)
+IsFileExistsInDirectory(const char *dir, const char *filename)
{
- struct stat st;
- char buf[MAXPGPATH];
+ struct stat st;
+ char buf[MAXPGPATH];
if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
pg_fatal("directory name too long: \"%s\"", dir);
@@ -767,27 +772,28 @@ _fileExistsInDirectory(const char *dir, const char *filename)
}
/*
+ * connectDatabase
+ *
* Make a database connection with the given parameters. An
* interactive password prompt is automatically issued if required.
*
* If fail_on_error is false, we return NULL without printing any message
* on failure, but preserve any prompted password for the next try.
- *
*/
static PGconn *
connectDatabase(const char *dbname, const char *connection_string,
const char *pghost, const char *pgport, const char *pguser,
trivalue prompt_password, bool fail_on_error)
{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
- static int server_version;
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
if (prompt_password == TRI_YES && !password)
password = simple_prompt("Password: ", false);
@@ -798,10 +804,10 @@ connectDatabase(const char *dbname, const char *connection_string,
*/
do
{
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
free(keywords);
free(values);
@@ -949,12 +955,14 @@ connectDatabase(const char *dbname, const char *connection_string,
}
/*
+ * executeQuery
+ *
* Run a query, return the results, exit program on failure.
*/
static PGresult *
executeQuery(PGconn *conn, const char *query)
{
- PGresult *res;
+ PGresult *res;
pg_log_info("executing %s", query);
@@ -970,20 +978,19 @@ executeQuery(PGconn *conn, const char *query)
return res;
}
-/* ----------------
- * ReadOneStatement()
+/*
+ * ReadOneStatement
*
* This will start reading from passed file pointer using fgetc and read till
* semicolon(sql statement terminator for global.sql file)
*
* EOF is returned if end-of-file input is seen; time to shut down.
- * ----------------
*/
static int
ReadOneStatement(StringInfo inBuf, FILE *f_glo)
{
- int c; /* character read from getc() */
+ int c; /* character read from getc() */
resetStringInfo(inBuf);
@@ -1015,6 +1022,8 @@ ReadOneStatement(StringInfo inBuf, FILE *f_glo)
}
/*
+ * filter_dbnames_for_restore
+ *
* This will remove names from all dblist that are given with exclude-database
* option.
*
@@ -1024,24 +1033,25 @@ static int
filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
SimpleStringList db_exclude_patterns)
{
- int countdb = 0;
- SimpleDatabaseOidListCell *cell = dbname_oid_list->head;
- SimpleDatabaseOidListCell *precell = NULL;
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
/* Return 0 if there is no db to restore. */
- if (cell == NULL)
+ if (dboid_cell == NULL)
return 0;
- while (cell != NULL)
+ /* Process all dbnames one by one and check whether each should be skipped. */
+ while (dboid_cell != NULL)
{
- bool skip_db_restore = false;
- SimpleStringListCell *prev = NULL;
- SimpleDatabaseOidListCell *next = cell->next;
+ bool skip_db_restore = false;
+ SimpleStringListCell *prev = NULL;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
/* Now match this dbname with exclude-database list. */
for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
{
- if (is_full_pattern(conn, cell->dbname, celldb->val))
+ if (is_full_pattern(conn, dboid_cell->db_name, celldb->val))
{
/*
* As we need to skip this dbname so set flag to remove it from
@@ -1062,22 +1072,25 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
/* Increment count if db needs to be restored. */
if (skip_db_restore)
- simple_dboid_list_delete(dbname_oid_list, cell, precell);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
else
{
- countdb++; /* Increment db couter. */
- precell = cell;
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
}
- cell = next; /* Process next dbname from dbname list. */
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
}
- return countdb;
+ return count_db;
}
/*
+ * get_dbname_oid_list_from_mfile
+ *
* Open map.dat file and read line by line and then prepare a list of database
- * names and correspoding dboid.
+ * names and corresponding db_oid.
*
* Returns, total number of database names in map.dat file.
*/
@@ -1097,19 +1110,19 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
if (pfile == NULL)
pg_fatal("could not open map.dat file: %s", strerror(errno));
- /* Append all the dbname and dboid to the list. */
+ /* Append all the dbname and db_oid to the list. */
while((fgets(line, MAXPGPATH, pfile)) != NULL)
{
- Oid dboid;
+ Oid db_oid;
char dbname[MAXPGPATH + 1];
- /* Extract dbname and dboid from line */
- sscanf(line, "%u %s" , &dboid, dbname);
+ /* Extract dbname and db_oid from line */
+ sscanf(line, "%u %s" , &db_oid, dbname);
- pg_log_info("found dbname as :%s and dboid:%d in map.dat file while restoring", dbname, dboid);
+ pg_log_info("found dbname: %s and db_oid: %u in map.dat file while restoring", dbname, db_oid);
/* Report error if file has any corrupted data. */
- if (!OidIsValid(dboid) || strlen(dbname) == 0)
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
/*
@@ -1117,7 +1130,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
* needs to skipped for restore or not but as of now, we are making
* a list of all the databases.
*/
- simple_dboid_list_append(dbname_oid_list, dboid, dbname);
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
count++;
}
@@ -1128,6 +1141,8 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
}
/*
+ * restoreAllDatabases
+ *
* This will restore databases those dumps are present in
* directory based on map.dat file mapping.
*
@@ -1140,7 +1155,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
int numWorkers)
{
SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
- SimpleDatabaseOidListCell *cell;
+ SimpleDatabaseOidListCell *dboid_cell;
int exit_code = 0;
int num_db_restore;
int num_total_db;
@@ -1171,9 +1186,9 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* after skipping names of exclude-database. Now we can launch parallel
* workers to restore these databases.
*/
- cell = dbname_oid_list.head;
+ dboid_cell = dbname_oid_list.head;
- while(cell != NULL)
+ while(dboid_cell != NULL)
{
char subdirpath[MAXPGPATH];
int dbexit_code;
@@ -1188,16 +1203,16 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
opts->cparams.override_dbname = NULL;
}
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, cell->dboid);
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
/*
* Database -d/--dbname is already created so reset createDB to ignore
* database creation error.
*/
- if (strcmp(cell->dbname, opts->cparams.dbname) == 0)
+ if (pg_strcasecmp(dboid_cell->db_name, opts->cparams.dbname) == 0)
opts->createDB = 0;
- pg_log_info("restoring database \"%s\"", cell->dbname);
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers);
@@ -1206,10 +1221,10 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
exit_code = dbexit_code;
/* Set createDB option to create new database. */
- if (strcmp(cell->dbname, opts->cparams.dbname) == 0)
+ if (pg_strcasecmp(dboid_cell->db_name, opts->cparams.dbname) == 0)
opts->createDB = 1;
- cell = cell->next;
+ dboid_cell = dboid_cell->next;
} /* end while */
/* Log number of processed databases.*/
@@ -1219,10 +1234,11 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
}
/*
+ * execute_global_sql_commands
+ *
* This will open global.dat file and will execute all global sql commands one
* by one statement.
- *
- * semicolon is considered as statement terminator.
+ * Semicolon is considered as statement terminator.
*/
static void
execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
@@ -1253,7 +1269,6 @@ execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
case PGRES_COMMAND_OK:
case PGRES_TUPLES_OK:
case PGRES_EMPTY_QUERY:
- case PGRES_COPY_IN:
break;
default:
pg_log_error("could not execute query: %s \nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
@@ -1265,18 +1280,20 @@ execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
}
/*
+ * simple_db_oid_list_append
+ *
* appends a node to the list in the end.
*/
static void
-simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid, const char *dbname)
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
{
SimpleDatabaseOidListCell *cell;
cell = pg_malloc_object(SimpleDatabaseOidListCell);
cell->next = NULL;
- cell->dboid = dboid;
- cell->dbname = pg_strdup(dbname);
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
if (list->tail)
list->tail->next = cell;
@@ -1286,6 +1303,8 @@ simple_dboid_list_append(SimpleDatabaseOidList *list, Oid dboid, const char *dbn
}
/*
+ * simple_string_list_delete
+ *
* delete cell from string list.
*/
static void
@@ -1305,10 +1324,12 @@ simple_string_list_delete(SimpleStringList *list, SimpleStringListCell *cell,
}
/*
+ * simple_db_oid_list_delete
+ *
* delete cell from database and oid list.
*/
static void
-simple_dboid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
SimpleDatabaseOidListCell *prev)
{
if (prev == NULL)
@@ -1348,7 +1369,7 @@ is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
{
if (PQntuples(result) == 1)
{
- const char *outstr;
+ const char *outstr;
outstr = PQgetvalue(result, 0, 0);
@@ -1359,7 +1380,7 @@ is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
* If output string of substring function is matches with str, then
* we can construct str from pattern.
*/
- if (strcmp(outstr, str) == 0)
+ if (pg_strcasecmp(outstr, str) == 0)
return true;
else
return false;
--
2.39.3
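The restore-side dispatch in the patch above boils down to file-existence probes: a directory with toc.dat is an ordinary single-database archive, while one with global.dat plus map.dat is a pg_dumpall archive. A standalone sketch of that check (helper names are illustrative, not the patch's actual identifiers):

```c
#include <stdio.h>
#include <stdbool.h>
#include <sys/stat.h>

#define PATHBUF 1024

/* Return true if dir/filename exists, mirroring the patch's
 * IsFileExistsInDirectory() helper. */
static bool
file_exists_in_dir(const char *dir, const char *filename)
{
    char        buf[PATHBUF];
    struct stat st;

    if (snprintf(buf, sizeof(buf), "%s/%s", dir, filename) >= (int) sizeof(buf))
        return false;            /* path too long */
    return stat(buf, &st) == 0;
}

/* Classify a dump directory the way the patched pg_restore decides
 * which restore path to take. */
static const char *
classify_dump_dir(const char *dir)
{
    if (file_exists_in_dir(dir, "toc.dat"))
        return "single-database archive";
    if (file_exists_in_dir(dir, "global.dat") &&
        file_exists_in_dir(dir, "map.dat"))
        return "pg_dumpall archive";
    return "unknown";
}
```

A pg_dumpall archive then triggers the globals-first restore followed by the per-database subdirectory restores.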
On Wed, 8 Jan 2025 at 20:07, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Hi all,
On Wed, 8 Jan 2025 at 00:34, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <nathandbossart@gmail.com> wrote:
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote:
Here, I am attaching an updated patch. I fixed some bugs of v01 patch and
did some code cleanup also.

Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in check-world are
failing. Would you mind resolving those issues? Also, if you haven't
already, please add an entry to the next commitfest [0] to ensure that 1)
this feature is tracked and 2) the automated tests will run.

Thanks Nathan for the quick response.
I fixed the documentation build and check-world failures in the latest patch. Now the docs build and check-world passes.
I added an entry to the commitfest for this patch. [0]
+ if (dbfile)
+ {
+     printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+                       dbfile, create_opts);
+     appendPQExpBufferStr(&cmd, " -F d ");
+ }

Have you given any thought to allowing a directory of custom format files,
as discussed upthread [1]? Perhaps that is better handled as a follow-up
patch, but it'd be good to understand the plan, anyway.

I will make these changes and will test. I will update my findings after doing some testing.
In the latest patch, I added dump and restore support for the directory/custom/tar/plain formats. Please consider this patch for review and testing.
Design:
When we give --format=d|c|t, we dump all global SQL commands into global.dat in plain SQL format, and we write a map.dat file with dbname and dboid entries. For each database, we create a separate subdirectory named after its dboid under the databases directory and dump it in the chosen archive format (d|c|t).
While restoring, we first restore all global SQL commands from global.dat and then restore the databases one by one. Since we support --exclude-database with pg_dumpall, we support it with pg_restore as well, to skip restoring any databases whose names match the specified patterns.
If we want to restore a single database, we can specify its particular subdirectory under the databases folder. To get the subdirectory name, we look up the dbname in the map.dat file.

TODO: Now I will work on test cases for the newly added options to pg_dumpall and pg_restore.
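The map.dat lookup described above can be sketched as a tiny parser. Only the one-entry-per-line "dboid dbname" format is taken from the patch; the struct and function names below are hypothetical:

```c
#include <stdio.h>
#include <string.h>

#define NAMEBUF 1024

/* One entry per line of map.dat: "<db_oid> <dbname>". */
typedef struct DbMapEntry
{
    unsigned int db_oid;
    char         db_name[NAMEBUF];
} DbMapEntry;

/* Parse up to max_entries lines from an already-opened map.dat stream.
 * Returns the number of entries read, or -1 on a malformed line
 * (bad syntax or an invalid Oid of 0). */
static int
parse_map_file(FILE *fp, DbMapEntry *entries, int max_entries)
{
    char line[NAMEBUF];
    int  count = 0;

    while (count < max_entries && fgets(line, sizeof(line), fp) != NULL)
    {
        if (sscanf(line, "%u %1023s", &entries[count].db_oid,
                   entries[count].db_name) != 2)
            return -1;           /* corrupted entry */
        if (entries[count].db_oid == 0)
            return -1;           /* InvalidOid */
        count++;
    }
    return count;
}
```

Given a dbname, scanning the parsed entries yields the db_oid, and therefore the databases/&lt;db_oid&gt; subdirectory to pass to pg_restore.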
Here, I am attaching the v04 patch for testing and review.
Sorry. My mistake.
v04 was a delta patch on top of v03.
Here, I am attaching the v05 patch for testing and review.
Apart from these bug fixes, I added code to handle --exclude-database=PATTERN. Earlier I was matching only exact names to skip databases during restore.
TODO: .pl test cases for the newly added options.
Here, I am attaching an updated patch for review and feedback.
[0] https://commitfest.postgresql.org
[1] /messages/by-id/CABUevExoQ26jo+aQ9QZq+UMA1aD6gfpm9xBnh_t5e0DhaCeRYA@mail.gmail.com

--
nathan

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v05_pg_dumpall-with-directory-tar-custom-format-08-jan.patchapplication/octet-stream; name=v05_pg_dumpall-with-directory-tar-custom-format-08-jan.patchDownload
From ac608a750fa559492b961c7dcd2f4b1fe47a7d3e Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 8 Jan 2025 22:05:37 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (directory, tar, custom, plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in main directory, then check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from map.dat list.
TODO1: test cases for new added options.
TODO2: We can dump and restore databases in parallel mode.
This needs more study.
---
doc/src/sgml/ref/pg_dumpall.sgml | 67 +++
doc/src/sgml/ref/pg_restore.sgml | 30 ++
src/bin/pg_dump/pg_dumpall.c | 170 ++++++-
src/bin/pg_dump/pg_restore.c | 781 ++++++++++++++++++++++++++++++-
4 files changed, 1020 insertions(+), 28 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279258..77a8d6a0c5 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -125,6 +125,73 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the format of dump files. To dump all databases in
+ archive format, pass <literal>directory</literal> so that each
+ database's dump is placed in its own subdirectory.
+ The default is <literal>plain</literal>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. This will create a directory
+ with one file for each table and large object being dumped, plus a so-called Table of Contents
+ file describing the dumped objects in a machine-readable format that pg_restore can read. A
+ directory format archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This format is compressed
+ by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719..ab2e035671 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -315,6 +315,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -932,6 +942,26 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+
</variablelist>
</para>
</refsect1>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c..b30dbd8c3b 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -81,6 +84,7 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -147,6 +151,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +193,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +244,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +272,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +423,8 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ archDumpFormat = parseDumpFormat(formatName);
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -497,9 +508,31 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout.
*/
- if (filename)
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /*
+ * If a non-plain format is specified, an output directory name must
+ * be provided.
+ */
+ if (!filename || strcmp(filename, "") == 0)
+ pg_fatal("no output directory specified");
+
+ /* TODO: accept the empty existing directory. */
+ if (mkdir(filename, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +640,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +653,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +670,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1522,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1542,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1550,30 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a subdirectory under the
+ * main directory, and dump each database into its own archive file
+ * under that subdirectory, as a single-database pg_dump would.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ /* Create a map file (to store dboid and dbname) */
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1581,14 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
+
+ /* Append the dboid and dbname to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1598,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1617,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1640,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1653,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1663,38 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain dump, then append file name and dump format to
+ * the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
+ else
+ pg_fatal("invalid dump format %d specified, please use c, d, or t only", archDumpFormat);
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1994,3 +2089,30 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("invalid dump format \"%s\" specified", format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938..e6a70adea9 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,69 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers);
+static PGconn *connectDatabase(const char *dbname, const char *conn_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *f_glo);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_list_delete(SimpleStringList *list,
+ SimpleStringListCell *cell, SimpleStringListCell *prev);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +119,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +173,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +351,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +382,16 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ if (globals_only && opts->cparams.dbname == NULL)
+ pg_fatal("option -g/--globals-only requires option -d/--dbname");
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -406,6 +469,73 @@ main(int argc, char **argv)
}
}
+ /*
+ * If the toc.dat file is not present in the given path, then check for
+ * global.dat and map.dat. If both files are present, restore all the
+ * databases listed in map.dat, skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat exist, then process them. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat")
+ && IsFileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * When restoring a dump of multiple databases, a connection
+ * database must be given.
+ */
+ if (opts->cparams.dbname == NULL)
+ pg_fatal("-d/--dbname should be specified when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C/--create option must be
+ * specified.
+ */
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create should be specified when using dump of pg_dumpall");
+
+ /* Connect to database to execute global sql commands from global.dat file. */
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+
+ /* Open the global.dat file and execute all the SQL commands. */
+ execute_global_sql_commands(conn, inputFileSpec);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ PQfinish(conn);
+ return 0;
+ }
+
+ /* Now restore all the databases from map.dat file. */
+ return restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -471,6 +601,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +614,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +753,644 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the passed file pointer using fgetc() until a semicolon (the
+ * SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *f_glo)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(f_glo)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from the dbname list any database names given with the
+ * --exclude-database option.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Process all dbnames one by one and check whether each needs to be skipped. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleStringListCell *prev = NULL;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if (is_full_pattern(conn, dboid_cell->db_name, celldb->val))
+ {
+ /*
+ * Mark this database so that it is removed from the list and
+ * skipped during restore.
+ */
+ skip_db_restore = true;
+
+ /*
+ * This pattern has been matched, so delete it from the list to
+ * avoid rechecking it in later iterations.
+ */
+ simple_string_list_delete(&db_exclude_patterns, celldb, prev);
+ break;
+ }
+
+ prev = celldb;
+ }
+
+ /* Remove the entry if it is to be skipped; otherwise count it. */
+ if (skip_db_restore)
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ else
+ {
+ count_db++; /* Increment the database counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat and read it line by line, building a list of database
+ * names and their corresponding db_oids.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid;
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract the dbname and db_oid from the line. */
+ sscanf(line, "%u %s", &db_oid, dbname);
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, db_oid);
+
+ /* Report an error if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: Before adding the dbname to the list, we could check whether
+ * this database needs to be skipped for restore, but as of now we
+ * build a list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases specified with the --exclude-database option are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ /* Skip any explicitly excluded database. */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d databases out of %d", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO: So far we have built a list of the databases that need to
+ * be restored, after skipping the exclude-database names. We could now
+ * launch parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while(dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * the already-created database (used with the -d/--dbname option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ /*
+ * Database -d/--dbname is already created so reset createDB to ignore
+ * database creation error.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, opts->cparams.dbname) == 0)
+ opts->createDB = 0;
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ /* Set createDB option to create new database. */
+ if (pg_strcasecmp(dboid_cell->db_name, opts->cparams.dbname) == 0)
+ opts->createDB = 1;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * This will open the global.dat file and execute all global SQL commands
+ * one statement at a time. A semicolon is considered the statement
+ * terminator.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* now open global.dat file */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_string_list_delete
+ *
+ * Delete a cell from the string list.
+ */
+static void
+simple_string_list_delete(SimpleStringList *list, SimpleStringListCell *cell,
+ SimpleStringListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete a cell from the database name and OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * is_full_pattern
+ *
+ * Returns true if the first string can be constructed from the given pattern.
+ *
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ /*
+ * If the output of the substring function matches str, then str can
+ * be constructed from the pattern.
+ */
+ if (pg_strcasecmp(outstr, str) == 0)
+ return true;
+ else
+ return false;
+ }
+ }
+ else
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), query->data);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
--
2.39.3
Hi,
Le mer. 8 janv. 2025 à 17:41, Mahendra Singh Thalor <mahi6run@gmail.com> a
écrit :
On Wed, 8 Jan 2025 at 20:07, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:Hi all,
On Wed, 8 Jan 2025 at 00:34, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <nathandbossart@gmail.com>
wrote:
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor
wrote:
Here, I am attaching an updated patch. I fixed some bugs of v01
patch and
did some code cleanup also.
Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in check-world are
failing. Would you mind resolving those issues? Also, if you
haven't
already, please add an entry to the next commitfest [0] to ensure
that 1)
this feature is tracked and 2) the automated tests will run.
Thanks Nathan for the quick response.
I fixed bugs of documentation changes and check-world in the latest
patch. Now docs are building and check-world is passing.
I added entry into commitfest for this patch.[0]
+ if (dbfile)
+ {
+ 	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ 					  dbfile, create_opts);
+ 	appendPQExpBufferStr(&cmd, " -F d ");
+ }

Have you given any thought to allowing a directory of custom format files,
as discussed upthread [1]? Perhaps that is better handled as a follow-up
patch, but it'd be good to understand the plan, anyway.
I will make these changes and will test. I will update my findings
after doing some testing.
In the latest patch, I added dump and restoring for
directory/custom/tar/plain formats. Please consider this patch for review
and testing.

Design:
When we give --format=d|c|t, all global SQL commands are dumped into
global.dat in plain SQL format, and a map.dat file is created mapping each
dbname to its dboid. For each database, a separate subdirectory named for
its dboid is created under the databases directory, and the dump is written
in the specified archive format (d|c|t).

While restoring, first all global SQL commands are restored from global.dat,
and then the databases are restored one by one. Since --exclude-database is
supported with pg_dumpall, the same option is supported with pg_restore to
skip restoring databases matching the specified patterns.

If we want to restore a single database, we can specify the particular
subdirectory from the databases folder. To find it, look up the dbname in
map.dat.

TODO: Now I will work on test cases for these newly added options to
pg_dumpall and pg_restore.
Here, I am attaching the v04 patch for testing and review.
Sorry. My mistake.
v04 was the delta patch on top of v03.

Here, I am attaching the v05 patch for testing and review.
Just FWIW, I did a quick test tonight. It applies cleanly, compiles OK. I
did a dump:
$ pg_dumpall -Fd -f dir
and then a restore (after dropping the databases I had):
$ pg_restore -Cd postgres -v dir
It worked really well. That's great.
Quick thing to fix: you've got this error message:
pg_restore: error: -d/--dbanme should be given when using archive dump of
pg_dumpall
I guess it is --dbname, rather than --dbanme.
Of course, it needs much more testing, but this feature would be great to
have. Thanks for working on this!
--
Guillaume.
On Thu, 9 Jan 2025 at 02:30, Guillaume Lelarge <guillaume@lelarge.info> wrote:
Hi,
Le mer. 8 janv. 2025 à 17:41, Mahendra Singh Thalor <mahi6run@gmail.com> a écrit :
On Wed, 8 Jan 2025 at 20:07, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Hi all,
On Wed, 8 Jan 2025 at 00:34, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <nathandbossart@gmail.com> wrote:
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote:
Here, I am attaching an updated patch. I fixed some bugs of v01 patch and
did some code cleanup also.

Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in check-world are
failing. Would you mind resolving those issues? Also, if you haven't
already, please add an entry to the next commitfest [0] to ensure that 1)
this feature is tracked and 2) the automated tests will run.

Thanks Nathan for the quick response.
I fixed bugs of documentation changes and check-world in the latest patch. Now docs are building and check-world is passing.
I added entry into commitfest for this patch.[0]
+ if (dbfile)
+ {
+ 	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ 					  dbfile, create_opts);
+ 	appendPQExpBufferStr(&cmd, " -F d ");
+ }

Have you given any thought to allowing a directory of custom format files,
as discussed upthread [1]? Perhaps that is better handled as a follow-up
patch, but it'd be good to understand the plan, anyway.

I will make these changes and will test. I will update my findings after
doing some testing.
In the latest patch, I added dump and restoring for directory/custom/tar/plain formats. Please consider this patch for review and testing.
Design:
When we give --format=d|c|t, all global SQL commands are dumped into global.dat in plain SQL format, and a map.dat file is created mapping each dbname to its dboid. For each database, a separate subdirectory named for its dboid is created under the databases directory, and the dump is written in the specified archive format (d|c|t).
While restoring, first all global SQL commands are restored from global.dat, and then the databases are restored one by one. Since --exclude-database is supported with pg_dumpall, the same option is supported with pg_restore to skip restoring databases matching the specified patterns.
If we want to restore a single database, we can specify the particular subdirectory from the databases folder. To find it, look up the dbname in map.dat.

TODO: Now I will work on test cases for these newly added options to pg_dumpall and pg_restore.
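To make the map.dat layout concrete (each line is "dboid dbname"), here is a minimal, hypothetical parser for one such line; the function name, bounded buffer, and validation are illustrative only and not part of the patch, which uses a plain sscanf into a MAXPGPATH-sized buffer:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch: parse one "dboid dbname" line of map.dat, in the
 * spirit of the patch's get_dbname_oid_list_from_mfile().  Returns 1 on
 * success, 0 on a malformed line (invalid OID or missing name).
 */
static int
parse_map_line(const char *line, unsigned int *db_oid,
			   char *dbname, size_t dbname_size)
{
	char		fmt[32];

	/* Build a bounded "%u %Ns" format so the name cannot overflow. */
	snprintf(fmt, sizeof(fmt), "%%u %%%zus", dbname_size - 1);

	*db_oid = 0;
	dbname[0] = '\0';
	if (sscanf(line, fmt, db_oid, dbname) != 2)
		return 0;

	/* Reject OID 0 (InvalidOid) and empty names, as the patch does. */
	return *db_oid != 0 && dbname[0] != '\0';
}
```

Unlike the unbounded %s in the patch, the bounded conversion guards against overly long names; whether that matters depends on how map.dat is produced.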
Here, I am attaching the v04 patch for testing and review.
Sorry. My mistake.
v04 was the delta patch on top of v03.

Here, I am attaching the v05 patch for testing and review.
Just FWIW, I did a quick test tonight. It applies cleanly, compiles OK. I did a dump:
Thanks for testing and review.
$ pg_dumpall -Fd -f dir
and then a restore (after dropping the databases I had):
$ pg_restore -Cd postgres -v dir
It worked really well. That's great.
Quick thing to fix: you've got this error message:
pg_restore: error: -d/--dbanme should be given when using archive dump of pg_dumpall

I guess it is --dbname, rather than --dbanme.
Fixed.
Of course, it needs much more testing, but this feature would be great to have. Thanks for working on this!
Apart from above typo, I fixed some review comments those I received
from Andrew in offline discussion. Thanks Andrew for the quick review.
Here, I am attaching an updated patch for review and testing.
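For what it's worth, the global.dat restore path in the attached patch pulls statements with a ReadOneStatement() helper whose implementation falls outside this excerpt; the following is only a rough, hypothetical sketch of that kind of semicolon-terminated reader, using a fixed buffer in place of the patch's StringInfo:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch of a ReadOneStatement-style reader: accumulate
 * characters from the stream until a ';' terminator, mirroring how
 * execute_global_sql_commands() in the patch consumes global.dat.
 * Returns 0 when a statement (or trailing text) was read, EOF when the
 * stream is exhausted.  A real implementation must also cope with
 * semicolons inside quoted strings and comments.
 */
static int
read_one_statement(char *buf, size_t bufsize, FILE *fp)
{
	size_t		len = 0;
	int			c;

	buf[0] = '\0';
	while ((c = fgetc(fp)) != EOF)
	{
		if (len + 1 < bufsize)
		{
			buf[len++] = (char) c;
			buf[len] = '\0';
		}
		if (c == ';')
			return 0;			/* statement complete */
	}
	return (len == 0) ? EOF : 0;	/* trailing text without ';' */
}
```

Treating the bare ';' as the terminator is why the restore loop can simply hand each accumulated buffer to PQexec() in turn.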
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v06_pg_dumpall-with-directory-tar-custom-format-08-jan.patch
From 4863c4c1920290752fff9e217bc65a1dfebeb2f7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 9 Jan 2025 08:04:05 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases. :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When we give -g/--globals-only option, then only restore globals, no db restoring.
Design:
When --format=d|t|c is specified and there is no toc.dat in main directory, then check
for global.dat and map.dat to restore all databases. If both files are exists in directory,
then first restore all globals from global.dat and then restore all databases one by one
from map.dat list.
TODO1: test cases for new added options.
TODO2: We can dump and restore databases in parallel mode.
This needs more study.
---
doc/src/sgml/ref/pg_dumpall.sgml | 67 ++
doc/src/sgml/ref/pg_restore.sgml | 30 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 171 +++++-
src/bin/pg_dump/pg_restore.c | 875 ++++++++++++++++++++++++++-
8 files changed, 1124 insertions(+), 42 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279258..77a8d6a0c5 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -125,6 +125,73 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify format of dump files. If we want to dump all the databases,
+ then pass this as directory so that dump of all databases can be taken
+ in separate subdirectory in archive format.
+ by default, this is plain.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. This will create a directory
+ with one file for each table and large object being dumped, plus a so-called Table of Contents
+ file describing the dumped objects in a machine-readable format that pg_restore can read. A
+ directory format archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This format is compressed
+ by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719..ab2e035671 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -315,6 +315,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -932,6 +942,26 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+
</variablelist>
</para>
</refsect1>
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b2..65000e5a08 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844..7153d4a40b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd..d94d0de2a5 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df95..eae626f621 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1147,7 +1147,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c..a2b35e3afb 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -81,6 +84,7 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -147,6 +151,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +193,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +244,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +272,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +423,8 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ archDumpFormat = parseDumpFormat(formatName);
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -497,9 +508,32 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new files with global.dat and map.dat names.
*/
- if (filename)
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /*
+ * If directory/tar/custom format is specified then we must provide the
+ * file name to create one main directory.
+ */
+ if (!filename || strcmp(filename, "") == 0)
+ pg_fatal("no output directory specified");
+
+ /* TODO: accept the empty existing directory. */
+ if (mkdir(filename, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +641,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +654,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +671,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1523,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1543,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1551,30 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; each database is then dumped
+ * into its own subdirectory underneath, in archive format, as with a
+ * single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
+
+ /* Create a map file (to store dboid and dbname) */
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open map file: %s", strerror(errno));
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1582,14 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
+
+ /* append dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1599,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1618,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1641,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1654,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1664,38 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain dump, then append file name and dump format to
+ * the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
+ else
+ pg_fatal("invalid dump format %d specified, please use c, d, or t only", archDumpFormat);
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1994,3 +2090,30 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("invalid dump format \"%s\" specified", format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938..8ac0587e49 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,70 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static PGconn *connectDatabase(const char *dbname, const char *conn_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *f_glo);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +120,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +174,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +203,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +230,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +352,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +383,16 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ if (globals_only && opts->cparams.dbname == NULL)
+ pg_fatal("option -g/--globals-only requires option -d/--dbname");
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -395,17 +459,92 @@ main(int argc, char **argv)
opts->format = archDirectory;
break;
+ case 'p':
+ case 'P':
+ break; /* default format */
+
case 't':
case 'T':
opts->format = archTar;
break;
default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
opts->formatName);
}
}
+ /*
+ * If the toc.dat file is not present in the given path, then check for
+ * global.dat and map.dat files. If both are present, restore all the
+ * databases listed in map.dat, skipping any that match the
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat exist, then process them. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat")
+ && IsFileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * To restore multiple databases, create database option should be
+ * specified.
+ */
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+
+ /* Connect to database to execute global sql commands from global.dat file. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, we are done after closing the connection. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ } /* end if */
+ } /* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -431,11 +570,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +610,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +623,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +762,723 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc() until a semicolon at the
+ * end of a line (the SQL statement terminator used in the global.dat file).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *f_glo)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(f_glo)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from the dblist any names that match an --exclude-database
+ * pattern.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Process all dbnames one by one and check whether each should be skipped. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * This dbname matches an exclude pattern, so set the flag to
+ * remove it from the list.
+ *
+ * Note: we can't remove the pattern from the skip list, as
+ * multiple database names might match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Delete the skipped db from the list; otherwise count it. */
+ if (skip_db_restore)
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and corresponding db_oids.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: %s", strerror(errno));
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid;
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dbname and db_oid from line */
+ sscanf(line, "%u %s" , &db_oid, dbname);
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether it
+ * should be skipped for restore, but for now we list all the
+ * databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * This will skip restoring for databases that are specified with
+ * exclude-database option.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"template1\" as connecting to \"postgres\" failed");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_log_info("no database connection, so --exclude-database patterns will be treated as plain names");
+ }
+ }
+
+ /*
+ * Skip any explicitly excluded database. If there is no database
+ * connection, then just consider pattern as simple name to compare.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO: so far we have built the list of databases to restore,
+ * after skipping the --exclude-database names. Next we could launch
+ * parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ /*
+ * Database -d/--dbname is already created so reset createDB to ignore
+ * database creation error.
+ */
+ if (opts->cparams.dbname &&
+ pg_strcasecmp(dboid_cell->db_name, opts->cparams.dbname) == 0)
+ opts->createDB = 0;
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ /* Set createDB option to create new database. */
+ if (opts->cparams.dbname &&
+ pg_strcasecmp(dboid_cell->db_name, opts->cparams.dbname) == 0)
+ opts->createDB = 1;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute the global sql commands one
+ * statement at a time. A semicolon at the end of a line is treated as
+ * the statement terminator. If outfile is passed, copy all the sql
+ * commands into outfile rather than executing them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, "w");
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: %s", strerror(errno));
+ }
+
+ /* Now copy global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(unconstify(char *, cell->db_name));
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete a cell from the database name/oid list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when deleting the last cell. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(unconstify(char *, cell->db_name));
+ pfree(cell);
+}
+
+/*
+ * is_full_pattern
+ *
+ * Returns true if the whole of str can be matched by the given pattern.
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ /*
+ * If the output of the substring function matches str, then the
+ * pattern matches the whole string.
+ */
+ if (outstr && pg_strcasecmp(outstr, str) == 0)
+ return true;
+ else
+ return false;
+ }
+ }
+ else
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), query->data);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
--
2.39.3
On Thu, 9 Jan 2025 at 08:11, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Thu, 9 Jan 2025 at 02:30, Guillaume Lelarge <guillaume@lelarge.info>
wrote:
Hi,
Le mer. 8 janv. 2025 à 17:41, Mahendra Singh Thalor <mahi6run@gmail.com>
a écrit :
On Wed, 8 Jan 2025 at 20:07, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Hi all,
On Wed, 8 Jan 2025 at 00:34, Mahendra Singh Thalor <
mahi6run@gmail.com> wrote:
On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <
nathandbossart@gmail.com> wrote:
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor
wrote:
Here, I am attaching an updated patch. I fixed some bugs of
v01 patch and
did some code cleanup also.
Thank you for picking this up! I started to review it, but the
documentation changes didn't build, and a few tests in
check-world are
failing. Would you mind resolving those issues? Also, if you
haven't
already, please add an entry to the next commitfest [0] to
ensure that 1)
this feature is tracked and 2) the automated tests will run.
Thanks Nathan for the quick response.
I fixed bugs of documentation changes and check-world in the
latest patch. Now docs are building and check-world is passing.
I added entry into commitfest for this patch.[0]
+ if (dbfile)
+ {
+     printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+                       dbfile, create_opts);
+     appendPQExpBufferStr(&cmd, " -F d ");
+ }

Have you given any thought to allowing a directory of custom
format files, as discussed upthread [1]? Perhaps that is better handled
as a follow-up patch, but it'd be good to understand the plan, anyway.
I will make these changes and will test. I will update my findings
after doing some testing.
In the latest patch, I added dump and restoring for
directory/custom/tar/plain formats. Please consider this patch for review
and testing.
Design:
When --format=d|c|t is given, we dump all global sql commands
into global.dat in plain sql format, and we write a map.dat file with each
dbname and dboid. For each database, we create a separate subdirectory
named for its dboid under the databases directory and dump it in the
requested archive format (d|c|t).
While restoring, we first restore all global sql commands from
global.dat and then restore the databases one by one. Since pg_dumpall
supports --exclude-database, we support it in pg_restore as well, to skip
restoring databases matching the specified patterns.
If we want to restore a single database, we can specify its
particular subdirectory from the databases folder. To find that
subdirectory, we look up the dbname in the map.dat file.
TODO: Next I will work on test cases for these newly added options to
pg_dumpall and pg_restore.
Here, I am attaching the v04 patch for testing and review.
Sorry. My mistake.
v04 was a delta patch on top of v03. Here, I am attaching the v05 patch for testing and review.
Just FWIW, I did a quick test tonight. It applies cleanly, compiles OK.
I did a dump:
Thanks for testing and review.
$ pg_dumpall -Fd -f dir
and then a restore (after dropping the databases I had):
$ pg_restore -Cd postgres -v dir
It worked really well. That's great.
Quick thing to fix: you've got this error message:
pg_restore: error: -d/--dbanme should be given when using archive dump
of pg_dumpall
I guess it is --dbname, rather than --dbanme.
Fixed.
Of course, it needs much more testing, but this feature would be great
to have. Thanks for working on this!
Apart from above typo, I fixed some review comments those I received
from Andrew in offline discussion. Thanks Andrew for the quick review. Here, I am attaching an updated patch for review and testing.
Hi all,
Based on some testing (the dump was shared by Andrew; thanks, Andrew), I fixed
some more bugs in the attached patch.
There are some open points for this patch; I will address them in
follow-up patches.
*Point 1*: pg_dumpall has the option --exclude-database=PATTERN, where
the PATTERN is validated by the server because we have a connection. In
pg_restore, however, we don't have a database connection in some cases, so
how should we handle these patterns? Or should --exclude-database accept
only plain NAMES?
*Point 2*:
For each database, we register an entry in the on_exit_nicely array for its
AH entry, but the maximum size of the array is MAX_ON_EXIT_NICELY=20,
so after restoring 20 databases we hit a fatal error. Should my code
reset this array, or should we increase the array size?
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v07_pg_dumpall-with-directory-tar-custom-format-08-jan.patch
From d6d7f46ef34b75ff3b0af8b46b4bea2ce83fe22e Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 9 Jan 2025 22:37:56 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat    ::: dboid dbname -- entries for all databases in simple text form
databases  :::
    subdir dboid1 -> toc.dat and data files in archive format
    subdir dboid2 -> toc.dat and data files in archive format
    etc
---------------------------------------------------------------------------
NOTE:
if needed, a single db can be restored from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
first restore all globals from global.dat and then restore the databases one by one
from the map.dat list.
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
TODO2: We need to make changes for exit_nicely, as we add one entry for each database
while restoring (MAX_ON_EXIT_NICELY).
TODO3: test cases for new added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study.
---
doc/src/sgml/ref/pg_dumpall.sgml | 67 +++
doc/src/sgml/ref/pg_restore.sgml | 30 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 171 +++++-
src/bin/pg_dump/pg_restore.c | 871 ++++++++++++++++++++++++++-
9 files changed, 1122 insertions(+), 43 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279258..77a8d6a0c5 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -125,6 +125,73 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the format of the dump files. To dump all the databases,
+ pass the directory format so that the dump of each database is
+ written to a separate subdirectory in archive format.
+ The default is plain.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. This will create a directory
+ with one file for each table and large object being dumped, plus a so-called Table of Contents
+ file describing the dumped objects in a machine-readable format that pg_restore can read. A
+ directory format archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This format is compressed
+ by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719..ab2e035671 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -315,6 +315,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -932,6 +942,26 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+
</variablelist>
</para>
</refsect1>
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b2..65000e5a08 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844..7153d4a40b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd..d94d0de2a5 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f515..fe37096332 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: increased to allow restoring 100 databases with a single restore command. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df95..eae626f621 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1147,7 +1147,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c..a2b35e3afb 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -81,6 +84,7 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -147,6 +151,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +193,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +244,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +272,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +423,8 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ archDumpFormat = parseDumpFormat(formatName);
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -497,9 +508,32 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new files with global.dat and map.dat names.
*/
- if (filename)
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /*
+ * If directory/tar/custom format is specified, then the file name
+ * must be provided to create the main directory.
+ */
+ if (!filename || strcmp(filename, "") == 0)
+ pg_fatal("no output directory specified");
+
+ /* TODO: accept an empty existing directory. */
+ if (mkdir(filename, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +641,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +654,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +671,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1523,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1543,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1551,30 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; each database is then dumped
+ * into its own subdirectory under it, in archive format, just as in a
+ * single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
+
+ /* Create a map file (to store dboid and dbname) */
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open map file: %s", strerror(errno));
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1582,14 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "-f %s/%s", db_subdir, oid);
+
+ /* Append dboid and dbname to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1599,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1618,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1641,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1654,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1664,38 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain dump, then append file name and dump format to
+ * the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
+ else
+ pg_fatal("invalid dump format %d specified, please use d/c/t only", archDumpFormat);
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1994,3 +2090,30 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("invalid dump format \"%s\" specified", format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938..a765d919e2 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,70 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static PGconn *connectDatabase(const char *dbname, const char *conn_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *f_glo);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +120,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +174,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +203,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +230,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +352,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +383,16 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ if (globals_only && opts->cparams.dbname == NULL)
+ pg_fatal("option -g/--globals-only requires option -d/--dbname");
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -395,17 +459,92 @@ main(int argc, char **argv)
opts->format = archDirectory;
break;
+ case 'p':
+ case 'P':
+ break; /* default format */
+
case 't':
case 'T':
opts->format = archTar;
break;
default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
opts->formatName);
}
}
+ /*
+ * If the toc.dat file is not present in the given path, then check for
+ * global.dat and map.dat files. If both files are present, then restore
+ * all the databases listed in map.dat, skipping those that match the
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat exist, then process them. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat")
+ && IsFileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * To restore multiple databases, the -C/--create option must be
+ * specified.
+ */
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create should be specified when using dump of pg_dumpall");
+
+ /* Connect to database to execute global sql commands from global.dat file. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }/* end if */
+ }/* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -431,11 +570,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +610,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +623,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +762,719 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc() until a semicolon
+ * followed by a newline is seen (the SQL statement terminator used in
+ * global.dat).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *f_glo)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(f_glo)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
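The terminator rule ReadOneStatement implements, a statement ends at a semicolon immediately followed by a newline, can be sketched in isolation. `one_statement_len` is a hypothetical helper (not in the patch) that applies the same rule to an in-memory string:

```c
#include <string.h>

/*
 * Hypothetical sketch of the ReadOneStatement terminator rule: a
 * statement ends at ";\n".  Returns the length of the first statement
 * in buf (including the ";\n"), or strlen(buf) if no terminator exists.
 */
static size_t
one_statement_len(const char *buf)
{
    const char *p = strstr(buf, ";\n");

    return p ? (size_t) (p - buf) + 2 : strlen(buf);
}
```

Note that, like the patch's loop, this does not understand quoted strings: a ";\n" inside a SQL string literal would falsely end the statement.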
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any database name that matches an
+ * --exclude-database pattern.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Walk the list and decide for each database whether to skip it. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * Set the flag so this dbname is removed from the list.
+ *
+ * Note: we cannot remove the pattern from the exclude list,
+ * since multiple database names might match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Remove the cell if the db is excluded, otherwise count it. */
+ if (skip_db_restore)
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ else
+ {
+ count_db++; /* increment db counter */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding db_oids.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: %s", strerror(errno));
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid;
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dbname and db_oid from the line. */
+ if (sscanf(line, "%u %s", &db_oid, dbname) != 2)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ pg_log_info("found dbname %s and db_oid %u in map.dat file while restoring", dbname, db_oid);
+
+ /* Report an error if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding the dbname to the list, we could check whether
+ * this db should be skipped for restore, but for now we list all
+ * the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
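Each map.dat line pairs a database OID with its name, separated by a space. A minimal sketch of that parse as a standalone helper (hypothetical name; unlike an unbounded %s, the %63s width bounds the write, and checking the sscanf result rejects malformed lines up front):

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/*
 * Hypothetical sketch: parse one map.dat line of the form
 * "<db_oid> <dbname>".  The %63s width bounds the write into dbname;
 * requiring two successful conversions rejects malformed lines.
 */
static bool
parse_map_line(const char *line, unsigned int *db_oid, char dbname[64])
{
    return sscanf(line, "%u %63s", db_oid, dbname) == 2;
}
```

A caller would reject the line (or report the line number) whenever this returns false, instead of inspecting possibly uninitialized output variables.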
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat mapping.
+ *
+ * Databases specified with the --exclude-database option are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\" to match --exclude-database patterns");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_log_info("no database connection available, so --exclude-database patterns will be treated as plain names");
+ }
+ }
+
+ /*
+ * TODO: the database-skipping behavior still needs a proper design.
+ *
+ * Skip any explicitly excluded database. If there is no database
+ * connection, then just treat each pattern as a plain name to compare.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d databases out of %d", num_db_restore, num_total_db);
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the maximum number of AH handles that can be registered on exit. */
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("could not restore more than 100 databases with a single pg_restore (%d requested)", num_db_restore);
+ }
+
+ /*
+ * XXX: TODO: at this point we have the list of databases to restore,
+ * with --exclude-database names already skipped. We could now launch
+ * parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into the already-created database (used with the -d/--dbname
+ * option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands in it,
+ * one statement at a time. A semicolon is treated as the statement
+ * terminator. If outfile is given, copy all SQL commands into it
+ * rather than executing them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, "w");
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: %s", strerror(errno));
+ }
+
+ /* Now append global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s \nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell->db_name);
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete one cell from the database/oid list, given its predecessor
+ * (NULL if the cell is the head).
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when deleting the last cell. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell->db_name);
+ pfree(cell);
+}
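The subtle part of this kind of singly linked delete is the tail pointer: when the removed cell is the list's tail, the tail must be redirected to the predecessor, or it is left pointing at freed memory. A self-contained sketch with hypothetical type names:

```c
#include <stdlib.h>

/* Minimal stand-ins for the patch's list types (hypothetical names). */
typedef struct Cell
{
    struct Cell *next;
    int          val;
} Cell;

typedef struct List
{
    Cell *head;
    Cell *tail;
} List;

/* Unlink and free cell, given its predecessor (NULL if cell is head). */
static void
list_delete(List *list, Cell *cell, Cell *prev)
{
    if (prev == NULL)
        list->head = cell->next;
    else
        prev->next = cell->next;

    if (list->tail == cell)     /* keep tail valid when deleting last */
        list->tail = prev;

    free(cell);
}
```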
+
+/*
+ * is_full_pattern
+ *
+ * Returns true if the given pattern matches the whole of str.
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ /*
+ * If the output of substring() matches str, then the pattern
+ * matches the whole of str.
+ */
+ if (outstr && pg_strcasecmp(outstr, str) == 0)
+ return true;
+ else
+ return false;
+ }
+ }
+ else
+ pg_log_error("could not execute query: %s \nCommand was: %s", PQerrorMessage(conn), query->data);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
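The server-side test relies on substring(str, pattern) returning the whole input when the POSIX regex pattern matches all of it. A sketch of the query construction as a standalone helper (hypothetical name; the real code should also escape quotes in both arguments, e.g. with PQescapeLiteral, before interpolating them):

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch: build the query used to test whether a POSIX
 * regex pattern matches an entire database name on the server side.
 * Returns the number of characters written (excluding the NUL), as
 * snprintf does.
 */
static int
build_match_query(char *buf, size_t sz, const char *str, const char *ptrn)
{
    return snprintf(buf, sz, "SELECT substring('%s', '%s')", str, ptrn);
}
```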
--
2.39.3
In main() in src/bin/pg_dump/pg_dumpall.c,
I think you need to do
archDumpFormat = parseDumpFormat(formatName);
/*
* Open the output file if required, otherwise use stdout. If required,
* then create new files with global.dat and map.dat names.
*/
if (archDumpFormat != archNull)
{
char toc_path[MAXPGPATH];
/*
* If directory/tar/custom format is specified then we must provide the
* file name to create one main directory.
*/
if (!filename || strcmp(filename, "") == 0)
pg_fatal("no output directory specified");
/* TODO: accept the empty existing directory. */
if (mkdir(filename, 0700) < 0)
pg_fatal("could not create directory \"%s\": %m",
filename);
snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
OPF = fopen(toc_path, "w");
if (!OPF)
pg_fatal("could not open global.dat file: %s", strerror(errno));
}
else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open output file \"%s\": %m",
filename);
}
else
OPF = stdout;
before the connectDatabase call.
Otherwise, if the cluster is not running,
``pg_dumpall --format=d``
would report a connection error instead of
"pg_dumpall: error: no output directory specified".
We want invalid ``pg_dumpall --format`` options
to error out even if the cluster is not running.
Attached are two invalid-option test cases.
You also need to change
<varlistentry>
<term><option>-f <replaceable
class="parameter">filename</replaceable></option></term>
<term><option>--file=<replaceable
class="parameter">filename</replaceable></option></term>
<listitem>
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
</para>
</listitem>
</varlistentry>
?
Since with --format=d,
<option>--file=<replaceable class="parameter">filename</replaceable></option>
cannot be omitted.
Attachments:
v7-0001-misc-tests-for-pg_dumpall.no-cfbot (application/octet-stream)
From 339e813a137e5c215073b40d508b24ba2312d1ca Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Sat, 11 Jan 2025 13:43:34 +0800
Subject: [PATCH v7 1/1] misc tests for pg_dumpall
---
src/bin/pg_dump/t/001_basic.pl | 4 ++++
src/bin/pg_dump/t/005_pg_dump_filterfile.pl | 8 ++++++++
2 files changed, 12 insertions(+)
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 214240f1ae..2d246e0a50 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -226,4 +226,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: invalid dump format "x" specified\E/,
+ 'pg_dumpall: invalid dump format');
done_testing();
diff --git a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
index 3568a246b2..fecd3478dd 100644
--- a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
+++ b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
@@ -529,6 +529,14 @@ command_fails_like(
#########################################
# pg_restore tests
+command_fails_like(
+ [
+ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only'
+ ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
command_ok(
[
--
2.34.1
On Sat, 11 Jan 2025 at 11:19, jian he <jian.universality@gmail.com> wrote:
Thanks Jian for the review and testing.
in src/bin/pg_dump/pg_dumpall.c main
I think you need to do
archDumpFormat = parseDumpFormat(formatName);
/*
* Open the output file if required, otherwise use stdout. If required,
* then create new files with global.dat and map.dat names.
*/
if (archDumpFormat != archNull)
{
char toc_path[MAXPGPATH];
/*
* If directory/tar/custom format is specified then we must provide the
* file name to create one main directory.
*/
if (!filename || strcmp(filename, "") == 0)
pg_fatal("no output directory specified");
/* TODO: accept the empty existing directory. */
if (mkdir(filename, 0700) < 0)
pg_fatal("could not create directory \"%s\": %m",
filename);
snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
OPF = fopen(toc_path, "w");
if (!OPF)
pg_fatal("could not open global.dat file: %s", strerror(errno));
}
else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open output file \"%s\": %m",
filename);
}
else
OPF = stdout;
before the connectDatabase call.
Okay. I will add an error check before connectDatabase call in the next version.
otherwise if the cluster is not setting up.
``pg_dumpall --format=d``
error would be about connection error, not
"pg_dumpall: error: no output directory specified".
We want invalid ``pg_dumpall --format`` options
to error out even if the cluster is not running.
Attached are two invalid-option test cases.
Thanks.
I am also working on test cases. I will add all error test cases in
the next version and will include these two also.
you also need change
<varlistentry>
<term><option>-f <replaceable
class="parameter">filename</replaceable></option></term>
<term><option>--file=<replaceable
class="parameter">filename</replaceable></option></term>
<listitem>
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
</para>
</listitem>
</varlistentry>
?since if --format=d,
<option>--file=<replaceable class="parameter">filename</replaceable></option>
can not be omitted.
Okay. I will fix it.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Hmm, this patch adds a function connectDatabase() to pg_restore, but a
function that's almost identical already exists in pg_dumpall. I
suggest they should be unified. Maybe create a new file for connection
management routines? (since this clearly doesn't fit common.c nor
dumputils.c).
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.
That's because in Europe they call me by name, and in the US by value!"
Thanks Alvaro for quick feedback.
On Sat, 11 Jan 2025 at 2:14 PM, Alvaro Herrera <alvherre@alvh.no-ip.org>
wrote:
Hmm, this patch adds a function connectDatabase() to pg_restore, but a
function that's almost identical already exists in pg_dumpall.
Yes, you are right. Both functions are the same, as I copied this
function from pg_dumpall.c.
I suggest they should be unified. Maybe create a new file for connection
management routines? (since this clearly doesn't fit common.c nor
dumputils.c).
Sure. I will create a new file and I will move these common functions into
that.
Thanks and regards
Mahendra Singh Thalor
https://www.EnterpriseDB.com/
hi.
the following two tests, you can add to src/bin/pg_dump/t/001_basic.pl
command_fails_like(
[ 'pg_restore', '--globals-only', '-f', 'xxx' ],
qr/\Qpg_restore: error: option -g\/--globals-only requires option
-d\/--dbname\E/,
'pg_restore: error: option -g/--globals-only requires option -d/--dbname'
);
command_fails_like(
[ 'pg_restore', '--globals-only', '--file=xxx', '--exclude-database=x',],
qr/\Qpg_restore: error: option --exclude-database cannot be used
together with -g\/--globals-only\E/,
'pg_restore: error: option --exclude-database cannot be used
together with -g/--globals-only'
);
in pg_restore.sgml.
<varlistentry>
<term><option>--exclude-database=<replaceable
class="parameter">pattern</replaceable></option></term>
<listitem>
the position should be right after
<varlistentry>
<term><option>-d <replaceable
class="parameter">dbname</replaceable></option></term>
<term><option>--dbname=<replaceable
class="parameter">dbname</replaceable></option></term>
Should
pg_restore --globals-only
pg_restore --exclude-database=pattern
be in a separate patch?
I am also wondering what will happen with:
pg_restore --exclude-database=pattern --dbname=pattern
On Sat, 11 Jan 2025 at 9:30 PM, jian he <jian.universality@gmail.com> wrote:
hi.
the following two tests, you can add to src/bin/pg_dump/t/001_basic.pl
command_fails_like(
[ 'pg_restore', '--globals-only', '-f', 'xxx' ],
qr/\Qpg_restore: error: option -g\/--globals-only requires option
-d\/--dbname\E/,
'pg_restore: error: option -g/--globals-only requires option
-d/--dbname'
);
command_fails_like(
[ 'pg_restore', '--globals-only', '--file=xxx',
'--exclude-database=x',],
qr/\Qpg_restore: error: option --exclude-database cannot be used
together with -g\/--globals-only\E/,
'pg_restore: error: option --exclude-database cannot be used
together with -g/--globals-only'
);
in pg_restore.sgml,
<varlistentry>
<term><option>--exclude-database=<replaceable
class="parameter">pattern</replaceable></option></term>
<listitem>
the position should right after
<varlistentry>
<term><option>-d <replaceable
class="parameter">dbname</replaceable></option></term>
<term><option>--dbname=<replaceable
class="parameter">dbname</replaceable></option></term>
should
pg_restore --globals-only
pg_restore --exclude-database=pattern
be in a separate patch?
i am also wondering what will happen:
pg_restore --exclude-database=pattern --dbname=pattern
For restore, we will make a server connection to the 'pattern' database,
and we will skip restoring the 'pattern' database since 'pattern' is
given with --exclude-database.
With the server connection, we will restore global.dat at the start of
pg_restore.
Thanks and regards
Mahendra Singh Thalor
EDB postgres
Thanks Alvaro and Jian for the review.
otherwise if the cluster is not setting up.
``pg_dumpall --format=d``
error would be about connection error, not
"pg_dumpall: error: no output directory specified".
We want invalid ``pg_dumpall --format`` options
to error out even if the cluster is not setting up.
Fixed. Apart from this, added handling to also accept an empty existing
directory with the --file option.
you also need change
<varlistentry>
<term><option>-f <replaceable
class="parameter">filename</replaceable></option></term>
<term><option>--file=<replaceable
class="parameter">filename</replaceable></option></term>
<listitem>
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
</para>
</listitem>
</varlistentry>
?
Since with --format=d,
<option>--file=<replaceable class="parameter">filename</replaceable></option>
cannot be omitted.
No, we don't need this change. With --format=d, we can omit the --file option.
On Sat, 11 Jan 2025 at 14:14, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hmm, this patch adds a function connectDatabase() to pg_restore, but a
function that's almost identical already exists in pg_dumpall. I
suggest they should be unified. Maybe create a new file for connection
management routines? (since this clearly doesn't fit common.c nor
dumputils.c).
I will make a new file in follow-up patches.
On Sat, 11 Jan 2025 at 21:38, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Sat, 11 Jan 2025 at 9:30 PM, jian he <jian.universality@gmail.com> wrote:
hi.
the following two tests, you can add to src/bin/pg_dump/t/001_basic.pl
command_fails_like(
[ 'pg_restore', '--globals-only', '-f', 'xxx' ],
qr/\Qpg_restore: error: option -g\/--globals-only requires option
-d\/--dbname\E/,
'pg_restore: error: option -g/--globals-only requires option -d/--dbname'
I removed this error from the code, as we can also dump the global SQL commands to a file.
);
command_fails_like(
[ 'pg_restore', '--globals-only', '--file=xxx', '--exclude-database=x',],
qr/\Qpg_restore: error: option --exclude-database cannot be used
together with -g\/--globals-only\E/,
'pg_restore: error: option --exclude-database cannot be used
together with -g/--globals-only'
);
Fixed.
in pg_restore.sgml.
<varlistentry>
<term><option>--exclude-database=<replaceable
class="parameter">pattern</replaceable></option></term>
<listitem>
the position should right after
<varlistentry>
<term><option>-d <replaceable
class="parameter">dbname</replaceable></option></term>
<term><option>--dbname=<replaceable
class="parameter">dbname</replaceable></option></term>
Fixed.
should
pg_restore --globals-only
pg_restore --exclude-database=pattern
be in a separate patch?
I think we can keep these 2 options in one patch only as both are for
pg_restore and there are not many code changes.
If we want, we can make separate patches for pg_dumpall and pg_restore options.
i am also wondering what will happen:
pg_restore --exclude-database=pattern --dbname=pattern
For restore, we will make a server connection to the 'pattern' database,
and we will skip restoring the 'pattern' database since 'pattern' is
given with --exclude-database.
With the server connection, we will restore global.dat at the start of
pg_restore, and for each database, we will issue the db creation command
from the 'pattern' db.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v08_pg_dumpall-with-directory-tar-custom-format-12-jan.patch (application/octet-stream)
From 9860d275f95a482d1a23a9f327c8e6cdf43be661 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Sun, 12 Jan 2025 02:37:13 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (default: plain text)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list.
TODO1: We need to think about --exclude-database=PATTERN for pg_restore.
TODO2: We need to adjust exit_nicely, as we add one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for the newly added options.
TODO4: We could dump and restore databases in parallel mode.
This needs more study.
TODO5: move code common to pg_dumpall and pg_restore to a new file,
e.g. the connectDatabase function, parseDump, etc.
---
doc/src/sgml/ref/pg_dumpall.sgml | 74 ++
doc/src/sgml/ref/pg_restore.sgml | 29 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 233 +++++-
src/bin/pg_dump/pg_restore.c | 872 +++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 4 +
src/bin/pg_dump/t/005_pg_dump_filterfile.pl | 8 +
11 files changed, 1203 insertions(+), 43 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f2792589..7cebaadbfc5 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -125,6 +125,80 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. To dump all databases, pass a
+ non-plain format so that each database's dump is placed in its own
+ subdirectory in archive format. The default is the plain format.
+
+ If a non-plain format is passed, then a global.dat file (global SQL
+ commands) and a map.dat file (the dboid and dbname of every database)
+ are created. In addition, a subdirectory named databases is created.
+ Under this databases subdirectory, there is one subdirectory, named
+ by dboid, for each database.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..ba2913b3356 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -166,6 +166,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +334,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..7153d4a40b6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..f70ea9233fe 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: increased to allow restoring up to 100 databases with a single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df956..eae626f6213 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1147,7 +1147,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..aadf7fa911d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -29,6 +30,7 @@
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -64,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -81,6 +84,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -147,6 +152,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +194,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +245,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +273,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +424,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, then a file name must be
+ * provided to create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty name");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -497,9 +522,23 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create the new directory and the global.dat file.
*/
- if (filename)
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory and accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +646,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +659,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +676,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1528,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1548,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1556,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a directory/tar/custom format is specified, then create a
+ * "databases" subdirectory under the main directory; each database's
+ * dump is placed in its own per-dboid subdirectory underneath it, in
+ * archive format, just as for a single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open map file: %s", strerror(errno));
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1522,6 +1590,18 @@ dumpDatabases(PGconn *conn)
if (strcmp(dbname, "template0") == 0)
continue;
+ /*
+ * If this is a non-plain dump format, then append the dboid and
+ * dbname to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
/* Skip any explicitly excluded database */
if (simple_string_list_member(&database_exclude_names, dbname))
{
@@ -1531,7 +1611,8 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1630,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1653,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1666,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1676,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is a non-plain-format dump, then append the file name and
+ * dump format to the pg_dump command to produce an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1994,3 +2100,82 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("invalid dump format \"%s\" specified", format);
+
+ return archDumpFormat;
+}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name. If an empty directory
+ * with the same name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+ }
+
+ if (!is_empty && mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ dirname);
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938a..04310534402 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,70 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static PGconn *connectDatabase(const char *dbname, const char *conn_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error);
+static PGresult *executeQuery(PGconn *conn, const char *query);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +120,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +174,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +203,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +230,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +352,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +383,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -395,17 +456,92 @@ main(int argc, char **argv)
opts->format = archDirectory;
break;
+ case 'p':
+ case 'P':
+ break; /* default format */
+
case 't':
case 'T':
opts->format = archTar;
break;
default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
opts->formatName);
}
}
+ /*
+ * If the toc.dat file is not present at the given path, then check for
+ * the global.dat and map.dat files. If both files are present, then
+ * restore all the databases listed in map.dat, skipping any that match
+ * an --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat exist, then process them. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat")
+ && IsFileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * To restore multiple databases, create database option should be
+ * specified.
+ */
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create must be specified when restoring a pg_dumpall dump");
+
+ /* Connect to a database to execute global SQL commands from global.dat. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }/* end if */
+ }/* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database using its toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -431,11 +567,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +607,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +620,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +759,723 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ */
+static PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ static int server_version;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version = PQserverVersion(conn);
+ if (server_version == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version
+ && (server_version < 90200 ||
+ (server_version / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+static PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: \"%s\"", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: \"%s\"", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read one SQL statement from the given file using fgetc, up to and
+ * including the terminating semicolon (the statement terminator used in
+ * the global.dat file).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any names that match a pattern given with
+ * the exclude-database option.
+ *
+ * Returns the number of database names that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Process each dbname and decide whether to skip restoring it. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * Mark this dbname for removal from the list.
+ *
+ * Note: we can't remove the pattern from the exclude list, as
+ * multiple database names might match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Remove the dbname from the list if it is excluded; else count it. */
+ if (skip_db_restore)
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid;
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract db_oid and the quoted dbname from the line. Use a scanset
+ * rather than %s, so names containing spaces are read up to the
+ * closing quote instead of stopping at the first whitespace. */
+ sscanf(line, "%u \"%[^\"]\"", &db_oid, dbname);
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, db_oid);
+
+ /* Report an error if the file has corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether
+ * this database should be skipped for restore, but for now we list
+ * all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory, based
+ * on the map.dat mapping.
+ *
+ * Databases matching a pattern given with the exclude-database option
+ * are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\" to process exclude-database patterns");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ /* Fall back to template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false);
+
+ if (!conn)
+ pg_log_info("no database connection available, so exclude-database patterns will be matched as plain names");
+ }
+ }
+
+ /*
+ * TODO: the mechanism for skipping databases needs a proper design.
+ *
+ * Skip any explicitly excluded database. If there is no database
+ * connection, just treat each pattern as a plain name to compare.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection; we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the maximum number of AH handles that can be registered on exit. */
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than 100 databases in a single pg_restore; databases to restore: %d", num_db_restore);
+ }
+
+ /*
+ * XXX: TODO: at this point we have the list of databases to restore,
+ * with exclude-database names already skipped. Now we can launch
+ * parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into the already-created database (used with the -d/--dbname
+ * option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time; a semicolon is the statement terminator.
+ *
+ * If outfile is given, copy all SQL commands into it rather than
+ * executing them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ /* Note: do not wrap the name in quotes; fopen() takes it literally. */
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, "w");
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now append global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: \"%s\"", PQerrorMessage(conn));
+ pg_log_error_detail("Command was: \"%s\"", sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete one cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when deleting the last cell. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * is_full_pattern
+ *
+ * Ask the server to evaluate substring(str from ptrn) and compare the
+ * result with str itself.
+ *
+ * Returns true if the whole of str can be matched by the given pattern.
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ /*
+ * If the output of substring() matches str, the pattern matches
+ * the whole string.
+ */
+ if (outstr && pg_strcasecmp(outstr, str) == 0)
+ return true;
+ else
+ return false;
+ }
+ }
+ else
+ {
+ pg_log_error("could not execute query: \"%s\"", PQerrorMessage(conn));
+ pg_log_error_detail("Command was: \"%s\"", query->data);
+ }
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 214240f1ae5..2d246e0a502 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -226,4 +226,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: invalid dump format "x" specified\E/,
+ 'pg_dumpall: invalid dump format');
done_testing();
diff --git a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
index 3568a246b23..fecd3478dde 100644
--- a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
+++ b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
@@ -529,6 +529,14 @@ command_fails_like(
#########################################
# pg_restore tests
+command_fails_like(
+ [
+ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only'
+ ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
command_ok(
[
--
2.39.3
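For reference, the statement splitting done by ReadOneStatement in the patch above ends a statement at a newline whose preceding character is a semicolon. A minimal model of that rule (Python here only as a runnable sketch; the patch implements this in C, and both share the limitation that a semicolon-newline inside a quoted literal would split too early):

```python
def split_statements(text):
    """Split SQL text into statements, mirroring ReadOneStatement:
    a statement ends at a newline immediately preceded by ';'."""
    statements = []
    buf = []
    for ch in text:
        buf.append(ch)
        if ch == '\n' and len(buf) > 1 and buf[-2] == ';':
            statements.append(''.join(buf))
            buf = []
    if buf:  # trailing text with no terminating semicolon-newline
        statements.append(''.join(buf))
    return statements

sql = 'CREATE ROLE alice;\nALTER ROLE alice\n  WITH LOGIN;\n'
print(split_statements(sql))
```

Note that the multi-line ALTER ROLE stays one statement because only its final line ends in a semicolon.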
On Sun, Jan 12, 2025 at 5:31 AM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
you also need to change
<varlistentry>
<term><option>-f <replaceable
class="parameter">filename</replaceable></option></term>
<term><option>--file=<replaceable
class="parameter">filename</replaceable></option></term>
<listitem>
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
</para>
</listitem>
</varlistentry>
since if --format=d,
<option>--file=<replaceable class="parameter">filename</replaceable></option>
cannot be omitted.
No, we don't need this change. With --format=d, we can omit the --file option.
I think this is not correct, since the following three will fail.
$BIN6/pg_dumpall --format=custom --exclude-database=*template* --schema-only
$BIN6/pg_dumpall --format=directory --exclude-database=*template* --schema-only
$BIN6/pg_dumpall --format=tar --exclude-database=*template* --schema-only
that means that for pg_dumpall, when the format is {custom|directory|tar},
the --file option cannot be omitted.
you introduced a format p(plain) for pg_restore? since
$BIN6/pg_restore --dbname=src6 --format=p
will not error out.
but doc/src/sgml/ref/pg_restore.sgml didn't mention this format.
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
can we use long format, i think that would improve the readability.
like changing from
appendPQExpBufferStr(&cmd, " -F d ");
to
appendPQExpBufferStr(&cmd, " --format=directory");
------------------------<<>>>------
I have tested {pg_dump && pg_restore --list}. pg_restore --list works
fine with format {directory|custom|tar}
but it seems there may be some problems with {pg_dumpall && pg_restore
--list} where format is not plain.
with your v08 patch, in my local environment.
$BIN6/pg_dumpall --format=custom --exclude-database=*template*
--schema-only --file=dumpall_src6.custom
$BIN6/pg_restore --dbname=src6 --verbose --schema-only --list
$SRC6/dumpall_src6.custom
error:
pg_restore: error: option -C/--create should be specified when using
dump of pg_dumpall
$BIN6/pg_restore --dbname=src6 --create --verbose --schema-only --list
$SRC6/dumpall_src6.custom
following is some of the output:
pg_restore: found dbname as : "`s3or" and db_oid:1 in map.dat file
while restoring
pg_restore: found dbname as : "`s3or" and db_oid:5 in map.dat file
while restoring
pg_restore: found total 2 database names in map.dat file
pg_restore: needs to restore 2 databases out of 2 databases
pg_restore: restoring database "`s3or"
pg_restore: error: could not open input file
"/home/jian/Desktop/pg_src/src6/postgres/dumpall_src6.custom/databases/1":
No such file or directory
Thanks Jian for the review and testing.
On Wed, 15 Jan 2025 at 14:29, jian he <jian.universality@gmail.com> wrote:
On Sun, Jan 12, 2025 at 5:31 AM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
you also need to change
<varlistentry>
<term><option>-f <replaceable
class="parameter">filename</replaceable></option></term>
<term><option>--file=<replaceable
class="parameter">filename</replaceable></option></term>
<listitem>
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
</para>
</listitem>
</varlistentry>
since if --format=d,
<option>--file=<replaceable class="parameter">filename</replaceable></option>
cannot be omitted.
No, we don't need this change. With --format=d, we can omit the --file option.
I think this is not correct, since the following three will fail.
$BIN6/pg_dumpall --format=custom --exclude-database=*template* --schema-only
$BIN6/pg_dumpall --format=directory --exclude-database=*template* --schema-only
$BIN6/pg_dumpall --format=tar --exclude-database=*template* --schema-only
that means that for pg_dumpall, when the format is {custom|directory|tar},
the --file option cannot be omitted.
Thanks. I got your point. I added one note for this case in the attached patch.
you introduced a format p(plain) for pg_restore? since
$BIN6/pg_restore --dbname=src6 --format=p
will not error out.
but doc/src/sgml/ref/pg_restore.sgml didn't mention this format.
Yes, I will do more doc changes and will modify some comments in code
as per new options.
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " -F d ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " -F c ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " -F t ");
can we use long format, i think that would improve the readability.
like changing from
appendPQExpBufferStr(&cmd, " -F d ");
to
appendPQExpBufferStr(&cmd, " --format=directory");
Fixed. In the whole file, we are using shortcuts for other options
also but as per your comment, I made the changes.
------------------------<<>>>------
I have tested {pg_dump && pg_restore --list}. pg_restore --list works
fine with format {directory|custom|tar}
but it seems there may be some problems with {pg_dumpall && pg_restore
--list} where format is not plain.
with your v08 patch, in my local environment.
$BIN6/pg_dumpall --format=custom --exclude-database=*template*
--schema-only --file=dumpall_src6.custom
$BIN6/pg_restore --dbname=src6 --verbose --schema-only --list
$SRC6/dumpall_src6.custom
error:
pg_restore: error: option -C/--create should be specified when using
dump of pg_dumpall
$BIN6/pg_restore --dbname=src6 --create --verbose --schema-only --list
$SRC6/dumpall_src6.custom
following is some of the output:
pg_restore: found dbname as : "`s3or" and db_oid:1 in map.dat file
while restoring
pg_restore: found dbname as : "`s3or" and db_oid:5 in map.dat file
while restoring
pg_restore: found total 2 database names in map.dat file
pg_restore: needs to restore 2 databases out of 2 databases
pg_restore: restoring database "`s3or"
pg_restore: error: could not open input file
"/home/jian/Desktop/pg_src/src6/postgres/dumpall_src6.custom/databases/1":
No such file or directory
Fixed.
On Sat, 11 Jan 2025 at 14:14, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hmm, this patch adds a function connectDatabase() to pg_restore, but a
function that's almost identical already exists in pg_dumpall. I
suggest they should be unified. Maybe create a new file for connection
management routines? (since this clearly doesn't fit common.c nor
dumputils.c).
Fixed. I made a new file with common_dumpall_restore.c and have moved
all common functions into the new file.
Apart from this, I added handling for some special database names in
the map.dat file. ex: "database name is one"
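For illustration, a minimal sketch of parsing such a quoted map.dat entry (Python here only as a runnable sketch; the patch does this in C, and the exact quoting rules are an assumption based on the `dboid "dbname"` format described in this thread; names containing an embedded double quote would need more work):

```python
def parse_map_line(line):
    """Parse one map.dat line of the assumed form: <oid> "<dbname>".
    The name is taken between the first and last double quote, so
    names containing spaces (e.g. "database name is one") survive."""
    oid_part, _, rest = line.partition(' ')
    first = rest.index('"')
    last = rest.rindex('"')
    return int(oid_part), rest[first + 1:last]

print(parse_map_line('5 "database name is one"'))  # → (5, 'database name is one')
```

This is the kind of handling that a plain `sscanf(line, "%u \"%s\"")` would get wrong, since `%s` stops at the first whitespace.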
Here, I am attaching an updated patch for review and testing.
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v09_pg_dumpall-with-directory-tar-custom-format-16-jan.patchapplication/octet-stream; name=v09_pg_dumpall-with-directory-tar-custom-format-16-jan.patchDownload
From 6d57301df44a7eb04020b2ef38349681a64c134c Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 16 Jan 2025 01:25:52 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (default: plain text)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude databases whose name matches the pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
first restore all globals from global.dat and then restore all databases one by one
from the map.dat list.
TODO1: We need to think about --exclude-database=PATTERN for pg_restore.
TODO2: We need to make changes for exit_nicely, as we register one entry for each
database while restoring (MAX_ON_EXIT_NICELY).
TODO3: some more test cases for the newly added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study.
---
doc/src/sgml/ref/pg_dumpall.sgml | 77 ++-
doc/src/sgml/ref/pg_restore.sgml | 29 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 314 +++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 473 ++++++--------
src/bin/pg_dump/pg_restore.c | 686 +++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 4 +
src/bin/pg_dump/t/005_pg_dump_filterfile.pl | 8 +
14 files changed, 1323 insertions(+), 328 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f2792589..2be8f7bd8ea 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -121,7 +121,82 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: this option can be omitted only when the plain format is used.
+ </para>
+ </listitem>
+ </varlistentry>
+
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. With a non-plain format, the
+ dump of each database is written into a separate subdirectory in
+ archive format. The default is the plain format.
+
+ If a non-plain format is used, a <filename>global.dat</filename> file
+ (global SQL commands) and a <filename>map.dat</filename> file (the
+ OIDs and names of all dumped databases) are created. In addition, a
+ <filename>databases</filename> subdirectory is created, containing
+ one subdirectory, named by database OID, for each database.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under each
+ database OID subdirectory, this creates one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..ba2913b3356 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -166,6 +166,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +334,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..ace5077085c
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,314 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This file contains common code shared by pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, pass the server version back to the caller. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/* ----------
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ * ----------
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse the user-supplied dump format name and return the corresponding
+ * ArchiveFormat; an unrecognized name is a fatal error.
+ */
+ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
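As an aside for reviewers: the quoting that `constructConnStr()` delegates to `appendConnStrVal()` follows libpq's connection-string rules (single-quote a value that is empty or contains whitespace, a quote, or a backslash; backslash-escape embedded quotes and backslashes). A standalone sketch of those rules, with hypothetical names (`needs_quoting`, `append_conn_option`) rather than the fe_utils API:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Return true if the value must be single-quoted in a connection string. */
static bool
needs_quoting(const char *val)
{
    if (*val == '\0')
        return true;
    for (const char *p = val; *p; p++)
        if (*p == ' ' || *p == '\t' || *p == '\'' || *p == '\\')
            return true;
    return false;
}

/* Append "key=value" to out, quoting/escaping the value as needed. */
static void
append_conn_option(char *out, const char *key, const char *val)
{
    char *p = out + strlen(out);

    if (*out)
        *p++ = ' ';             /* options are space-separated */
    p += sprintf(p, "%s=", key);
    if (!needs_quoting(val))
        p += sprintf(p, "%s", val);
    else
    {
        *p++ = '\'';
        for (; *val; val++)
        {
            if (*val == '\'' || *val == '\\')
                *p++ = '\\';    /* backslash-escape quote and backslash */
            *p++ = *val;
        }
        *p++ = '\'';
    }
    *p = '\0';
}
```

The real code builds into a PQExpBuffer instead of a fixed caller buffer; this sketch only illustrates the escaping behaviour.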
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a27c3e9fb89
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname,
+ const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern ArchiveFormat parseDumpFormat(const char *format);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..7153d4a40b6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
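The effect of the new `append_data` flag in `SetOutput()` can be reduced to a small sketch: the first writer truncates the output file and later writers append, which is what lets a plain-text pg_dumpall emit the globals first and then append each database's dump to the same file. `write_section()` is illustrative only, not the patch's actual API:

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Write one section of output to path.  With append_data false the
 * file is truncated first (the globals pass); with it true the text
 * is appended (each per-database pass).  Returns 0 on success.
 */
static int
write_section(const char *path, const char *text, bool append_data)
{
    /* "ab"/"wb" mirror PG_BINARY_A/PG_BINARY_W in the patch */
    FILE *fp = fopen(path, append_data ? "ab" : "wb");

    if (fp == NULL)
        return -1;
    fputs(text, fp);
    fclose(fp);
    return 0;
}
```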
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..f70ea9233fe 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: raised so that a single pg_restore run can restore up to 100 databases. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df956..eae626f6213 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1147,7 +1147,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..4bb0c8030e2 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,24 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +107,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +121,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +145,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +187,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +238,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +266,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +417,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is required so
+ * that the main output directory can be created.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -468,7 +486,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +495,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -497,9 +518,23 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
+ * Open the output file if required, otherwise use stdout. For
+ * non-plain formats, create the output directory and global.dat file.
*/
- if (filename)
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create a new directory, or accept an existing empty one. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
{
OPF = fopen(filename, PG_BINARY_W);
if (!OPF)
@@ -607,7 +642,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +655,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +672,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1524,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1544,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1552,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For non-plain formats, create a "databases" subdirectory under the
+ * main directory; each database is then dumped into its own file or
+ * subdirectory there, just as with a single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,9 +1593,22 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain formats, record the database OID and name in
+ * map.dat and compute the per-database output path.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one "dboid dbname" line per database. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1626,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1649,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
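Since `dumpDatabases()` writes one `"dboid dbname"` line per database into map.dat, the reader on the pg_restore side presumably splits each line at the first space (database names may themselves contain spaces). A hedged sketch of such a parser — `parse_map_line` and the fixed buffer sizes are illustrative, not taken from the patch:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse one "dboid dbname" line in the format dumpDatabases() writes
 * into map.dat.  The OID runs up to the first space; everything after
 * it (minus a trailing newline) is the database name.
 */
static bool
parse_map_line(const char *line, char *oid, size_t oidsz,
               char *dbname, size_t namesz)
{
    const char *sep = strchr(line, ' ');
    size_t      oidlen,
                namelen;

    if (sep == NULL || sep == line)
        return false;           /* no separator, or empty OID field */
    oidlen = (size_t) (sep - line);
    if (oidlen >= oidsz)
        return false;
    memcpy(oid, line, oidlen);
    oid[oidlen] = '\0';

    sep++;                      /* skip the separating space */
    namelen = strlen(sep);
    if (namelen > 0 && sep[namelen - 1] == '\n')
        namelen--;              /* drop trailing newline */
    if (namelen == 0 || namelen >= namesz)
        return false;
    memcpy(dbname, sep, namelen);
    dbname[namelen] = '\0';
    return true;
}
```

Names containing a newline would still defeat this one-line-per-database format; that may be worth a note in the patch.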
@@ -1580,7 +1662,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1672,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the archive
+ * format on the pg_dump command line.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1751,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1846,50 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * This will create a new directory with the given name. If an empty
+ * directory with that name already exists, it is used instead.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+ }
+
+ if (!is_empty && mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m",
+ dirname);
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938a..4bece208db4 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -41,27 +41,67 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +117,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +171,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +200,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +227,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +349,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +380,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -383,28 +441,80 @@ main(int argc, char **argv)
if (opts->formatName)
{
- switch (opts->formatName[0])
+ opts->format = parseDumpFormat(opts->formatName);
+ }
+
+ /*
+ * If no toc.dat file is present in the given path, then check for
+ * global.dat and map.dat files. If both are present, restore all the
+ * databases listed in map.dat, skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat both exist, then process them. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat")
+ && IsFileExistsInDirectory(inputFileSpec, "map.dat"))
{
- case 'c':
- case 'C':
- opts->format = archCustom;
- break;
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * To restore multiple databases, the -C/--create option must be
+ * specified.
+ */
+ if (opts->createDB != 1)
+ pg_fatal("option -C/--create should be specified when using dump of pg_dumpall");
+
+ /* Connect to database to execute global sql commands from global.dat file. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
- case 'd':
- case 'D':
- opts->format = archDirectory;
- break;
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
- case 't':
- case 'T':
- opts->format = archTar;
- break;
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
- default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
- opts->formatName);
- }
- }
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ } /* end if */
+ } /* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
AH = OpenArchive(inputFileSpec, opts->format);
@@ -431,11 +541,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +581,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +594,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +733,529 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc until a semicolon (the SQL
+ * statement terminator in global.dat) is seen at end of line.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from the given list any database names that match an
+ * --exclude-database pattern.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Process each dbname and decide whether its restore should be skipped. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * Set the flag so that this dbname is removed from the list.
+ *
+ * Note: we can't remove the pattern from the skip list, as
+ * multiple database names might match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Increment count if db needs to be restored. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove \n from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether
+ * it should be skipped for restore, but for now we list all the
+ * databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\"; trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection, so treating exclude-database patterns as plain names");
+ }
+ }
+
+ /*
+ * TODO: the database-skipping behavior still needs a proper design.
+ *
+ * Skip any explicitly excluded database. If there is no database
+ * connection, treat each pattern as a plain name to compare.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the connection; we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100 (max AH handles registered on exit). */
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than 100 databases in a single pg_restore; %d requested", num_db_restore);
+ }
+
+ /*
+ * XXX: TODO so far we have built the list of databases to restore,
+ * with the exclude-database names removed. Now we can launch
+ * parallel workers to restore them.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname so that objects are restored into the
+ * already-created database (relevant with -d/--dbname).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time. A semicolon is taken as the statement terminator.
+ * If outfile is given, copy all the SQL commands into it rather than
+ * executing them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, "w");
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now copy global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * appends a node to the list in the end.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete a cell from the database/oid list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * is_full_pattern
+ *
+ * This uses the server's substring() function to apply the pattern to
+ * the given string; the result is compared with the original string to
+ * see whether the pattern matches it in full.
+ *
+ * Returns true if str is fully matched by the given pattern.
+ *
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ /*
+ * If the output of substring() matches str, the pattern matches
+ * str in full.
+ */
+ if (outstr && pg_strcasecmp(outstr, str) == 0)
+ return true;
+ else
+ return false;
+ }
+ }
+ else
+ {
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query->data);
+ }
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..d42e8bdebbf
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -226,4 +226,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
index 3568a246b23..fecd3478dde 100644
--- a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
+++ b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
@@ -529,6 +529,14 @@ command_fails_like(
#########################################
# pg_restore tests
+command_fails_like(
+ [
+ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only'
+ ],
+ qr/\Qg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
command_ok(
[
--
2.39.3
hi.
in master src/bin/pg_dump/pg_restore.c: main function
if (opts->tocSummary)
PrintTOCSummary(AH);
else
{
ProcessArchiveRestoreOptions(AH);
RestoreArchive(AH);
}
When opts->tocSummary is true (pg_restore --list), no query should be executed,
but with your patch (pg_restore --list) execute_global_sql_commands may be called,
which executes a query.
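To illustrate the point, a pure listing run must stay read-only; the simplest fix is to gate the globals step on the listing flag. A minimal sketch, with a hypothetical helper name that is not from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical helper: decide whether global.dat should be processed.
 * For a pure --list run (tocSummary set) we must not execute any SQL
 * against the server, only print the table of contents.
 */
static bool
should_process_globals(bool tocSummary)
{
    return !tocSummary;         /* --list must stay read-only */
}
```

The call to execute_global_sql_commands would then be wrapped in this check before any connection is made.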
sscanf(line, "%u" , &db_oid);
sscanf(line, "%s" , db_oid_str);
i think it would be better
sscanf(line, "%u %s" , &db_oid, db_oid_str);
in doc/src/sgml/ref/pg_dumpall.sgml
Note: This option can be omitted only when --format=p|plain.
maybe change to
Note: This option can be omitted only when <option>--format</option> is plain.
--format=format section:
""
Under this databases subdirectory, there will be subdirectory with
dboid name for each database.
""
this sentence is not correct? because
drwxr-xr-x databases
.rw-rw-r-- global.dat
.rw-rw-r-- map.dat
"databases" is a directory, and under the "databases" directory, there is a
list of files.
each file's name corresponds to a unique database name,
so there is no subdirectory under a subdirectory?
in src/bin/pg_dump/meson.build
you need add 'common_dumpall_restore.c', to the pg_dump_common_sources section.
otherwise meson build cannot compile.
$BIN6/pg_restore --dbname=src6 --verbose --list $SRC6/dumpall.custom6
pg_restore: error: option -C/--create should be specified when using
dump of pg_dumpall
this command should not fail?
in doc/src/sgml/ref/pg_restore.sgml
<varlistentry>
...
<term><option>--format=<replaceable
class="parameter">format</replaceable></option></term>
also need
<term><literal>plain</literal></term>
?
hi.
$BIN6/pg_dumpall --format=directory --verbose --file=test1
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
pg_dumpall: error: could not create directory "test1": File exists
we should first validate the --file option, and if it is not OK, error out immediately.
if it is OK, then connect to the db and run the SQL query?
create_or_open_dir also needs to change.
The attached is the minor change I came up with.
Attachments:
v9-0001-minor-refactor-pg_dumpall.no-cfbot
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf..99ae774b32 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -16,6 +16,7 @@ pg_dump_common_sources = files(
'pg_backup_null.c',
'pg_backup_tar.c',
'pg_backup_utils.c',
+ 'common_dumpall_restore.c',
)
pg_dump_common = static_library('libpgdump_common',
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 4bb0c8030e..8409359b9b 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -478,6 +478,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %s", strerror(errno));
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -517,33 +544,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout. If required,
- * then create new directory and global.dat file.
- */
- if (archDumpFormat != archNull)
- {
- char toc_path[MAXPGPATH];
-
- /* Create new directory and accept the empty existing directory. */
- create_or_open_dir(filename);
-
- snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
-
- OPF = fopen(toc_path, "w");
- if (!OPF)
- pg_fatal("could not open global.dat file: %s", strerror(errno));
- }
- else if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -1887,9 +1887,17 @@ create_or_open_dir(const char *dirname)
pg_fatal("could not close directory \"%s\": %m",
dirname);
}
- }
- if (!is_empty && mkdir(dirname, 0700) < 0)
- pg_fatal("could not create directory \"%s\": %m",
- dirname);
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", "
+ "or run %s with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
}
Thanks Jian.
On Thu, 16 Jan 2025 at 14:14, jian he <jian.universality@gmail.com> wrote:
hi.
$BIN6/pg_dumpall --format=directory --verbose --file=test1
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
pg_dumpall: error: could not create directory "test1": File exists
we should first validate the --file option, and if it is not OK, error out immediately.
if it is OK, then connect to the db and run the SQL query?
create_or_open_dir also needs to change.
The attached is the minor change I came up with.
As per your comments and suggestions, I merged the delta patch. I
think in many places we are validating files after the connection as well.
When opts->tocSummary is true (pg_restore --list), no query should be executed,
but your patch's pg_restore --list may call execute_global_sql_commands,
which executes a query.
Okay. I will study this case further.
sscanf(line, "%u" , &db_oid);
sscanf(line, "%s" , db_oid_str);
I think it would be better as:
sscanf(line, "%u %s" , &db_oid, db_oid_str);
No, we can't use this, as the dbname can be complex, with multiple spaces.
Ex: create database "database db is long string";
If we use %s, it will read only the first string, up to the first space.
We can use something like: sscanf(line, "%u %2000[^\n]", &db_oid, db_oid_str);
in doc/src/sgml/ref/pg_dumpall.sgml
Note: This option can be omitted only when --format=p|plain.
maybe change to
Note: This option can be omitted only when <option>--format</option> is plain.
Fixed.
--format=format section:
""
Under this databases subdirectory, there will be subdirectory with
dboid name for each database.
""
this sentence is not correct? because
drwxr-xr-x databases
.rw-rw-r-- global.dat
.rw-rw-r-- map.dat
"databases" is a directory, and under the "databases" directory there is a
list of files.
Each filename corresponds to a unique database name,
so there is no subdirectory under the subdirectory?
If it is the directory format, then we will create a subdirectory. I made
some modifications to this paragraph in the latest patch.
in src/bin/pg_dump/meson.build
you need add 'common_dumpall_restore.c', to the pg_dump_common_sources section.
otherwise meson build cannot compile.
I think we should not add it under pg_dump_common_sources; rather, we
should add it to pg_dumpall and pg_restore only.
I added this.
$BIN6/pg_restore --dbname=src6 --verbose --list $SRC6/dumpall.custom6
pg_restore: error: option -C/--create should be specified when using
dump of pg_dumpall
this command should not fail?
If a dump has multiple databases, then we should use the -C option;
otherwise all dumps will be restored into a single db. For
now, I removed this error and changed it to a pg_log_info.
in doc/src/sgml/ref/pg_restore.sgml
<varlistentry>
...
<term><option>--format=<replaceable
class="parameter">format</replaceable></option></term>
also need
<term><literal>plain</literal></term>
?
plain format is not supported with pg_restore. I added an error for this format.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v10_pg_dumpall-with-directory-tar-custom-format-17-jan.patch (application/octet-stream)
From 05241e17694579b6c4fbc54f07e0ace05c0fcd23 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 16 Jan 2025 23:51:40 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (directory, tar, custom, plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore a single db from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, refer to the dbname entry in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When we give -g/--globals-only option, then only restore globals, no db restoring.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, then check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
then first restore all globals from global.dat and then restore the databases one by one
from the map.dat list.
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
TODO2: We need to make changes to exit_nicely, as we add one entry for each database while
restoring (MAX_ON_EXIT_NICELY).
TODO3: some more test cases for the newly added options.
TODO4: We can dump and restore databases in parallel mode. This needs more study.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 78 ++-
doc/src/sgml/ref/pg_restore.sgml | 29 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 314 +++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 503 ++++++--------
src/bin/pg_dump/pg_restore.c | 695 +++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 4 +
src/bin/pg_dump/t/005_pg_dump_filterfile.pl | 8 +
15 files changed, 1353 insertions(+), 340 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f2792589..8ca49a65977 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump all databases, pass a
+ non-plain format so that the dump of each database is taken in a
+ separate subdirectory in archive format.
+ By default, the format is plain.
+
+ If a non-plain format is passed, then global.dat (global SQL commands) and
+ map.dat (a list of dboids and dbnames for all databases) files will be created.
+ Apart from these files, a subdirectory named databases will be created.
+ Under this databases subdirectory, there will be a file named for the dboid of each
+ database, and if <option>--format</option> is directory, then toc.dat and the other
+ dump files will be under a dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..ba2913b3356 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -166,6 +166,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +334,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..ace5077085c
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,314 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * Code common to pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/* ----------
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ * ----------
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse the format string and return the corresponding ArchiveFormat.
+ */
+ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a27c3e9fb89
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname,
+ const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern ArchiveFormat parseDumpFormat(const char *format);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..7153d4a40b6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..f70ea9233fe 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: increasing this to keep 100 db restoring by single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df956..eae626f6213 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1147,7 +1147,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..8409359b9b4 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,24 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +107,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +121,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +145,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +187,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +238,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +266,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +417,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If non-plain format is specified then we must provide the
+ * file name to create one main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty string");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +478,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, "w");
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +513,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +522,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +544,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -607,7 +642,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +655,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +672,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1524,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1544,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1552,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For non-plain (directory/tar/custom) formats, create a "databases"
+ * subdirectory under the main directory; each database is then dumped
+ * into its own archive there, as a single-database pg_dump would produce.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, "w");
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,9 +1593,22 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain dump formats, append the database OID and name to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one map-file line per database: OID, then name. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1549,20 +1626,21 @@ dumpDatabases(PGconn *conn)
{
create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
}
else
create_opts = "--create";
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if ((archDumpFormat == archNull) && filename)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1571,6 +1649,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1662,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1672,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the target path and the dump
+ * format on the pg_dump command line to produce an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1751,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1846,58 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name; if a directory with that
+ * name already exists and is empty, use it instead.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", or run %s "
+ "with an -f argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938a..e7955065fcd 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,67 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +117,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +171,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +200,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +227,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +349,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to exclude while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +380,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -383,28 +441,80 @@ main(int argc, char **argv)
if (opts->formatName)
{
- switch (opts->formatName[0])
+ opts->format = parseDumpFormat(opts->formatName);
+
+ /* Plain format is not supported for pg_restore. */
+ if (opts->format == archNull)
{
- case 'c':
- case 'C':
- opts->format = archCustom;
- break;
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ opts->formatName);
+ }
+ }
- case 'd':
- case 'D':
- opts->format = archDirectory;
- break;
+ /*
+ * If no toc.dat is present at the given path, check for global.dat and
+ * map.dat. If both are present, restore the globals and then every
+ * database listed in map.dat, skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat and map.dat both exist, process them. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat")
+ && IsFileExistsInDirectory(inputFileSpec, "map.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
- case 't':
- case 'T':
- opts->format = archTar;
- break;
+ /* Connect to database to execute global sql commands from global.dat file. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
- default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
- opts->formatName);
- }
- }
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ } /* end if */
+ } /* end if */
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its archive (toc.dat-based) dump.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
AH = OpenArchive(inputFileSpec, opts->format);
@@ -431,11 +541,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +581,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +594,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +733,536 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc() until a semicolon at the
+ * end of a line (the SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any database whose name matches an
+ * --exclude-database pattern.
+ *
+ * Returns the number of databases remaining to be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Check each dbname to see whether its restore should be skipped. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * Set the flag so this dbname is removed from the list.
+ *
+ * Note: we cannot remove the pattern from the exclude list,
+ * since multiple database names may match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Either drop the entry, or count it as a database to restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Count this database as one to restore. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding OIDs.
+ *
+ * Returns the total number of database entries in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract the OID, both as a number and as its textual form. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat at line %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be skipped,
+ * but for now we list every database and filter afterwards.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * using map.dat to map OIDs to database names.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found %d database entries in map.dat", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying \"template1\"");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection; treating --exclude-database patterns as plain names");
+ }
+ }
+
+ /*
+ * Skip any explicitly excluded database. If there is no database
+ * connection, compare each pattern as a plain name.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the connection; we are done with globals and pattern matching. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * To restore multiple databases, either the -C (create database) option
+ * must be given or all target databases must already exist.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring a pg_dumpall archive without -C; target databases must already exist");
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the most AH handles we can register on exit. */
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than 100 databases in one pg_restore run, but %d were requested", num_db_restore);
+ }
+
+ /*
+ * XXX: at this point we have the final list of databases to restore,
+ * with excluded names filtered out. Eventually we could launch
+ * parallel workers to restore them.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands in it, one
+ * statement at a time. A semicolon is treated as the statement terminator.
+ * If outfile is given, copy the SQL commands into it rather than executing
+ * them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, "r");
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, "w");
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now copy global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete the given cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid if we removed the last cell. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * is_full_pattern
+ *
+ * Run the server's substring() function on str with the given pattern, and
+ * compare its output with str itself to validate the pattern.
+ *
+ * Returns true if str can be constructed in full from the given pattern.
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1 && !PQgetisnull(result, 0, 0))
+ {
+ /*
+ * If the output of substring() matches str, then str can be
+ * constructed from the pattern. Compare before clearing the
+ * result, since the returned value points into its storage.
+ */
+ bool match = (pg_strcasecmp(PQgetvalue(result, 0, 0), str) == 0);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+ return match;
+ }
+ }
+ else
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), query->data);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..d42e8bdebbf
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -226,4 +226,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
index 3568a246b23..fecd3478dde 100644
--- a/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
+++ b/src/bin/pg_dump/t/005_pg_dump_filterfile.pl
@@ -529,6 +529,14 @@ command_fails_like(
#########################################
# pg_restore tests
+command_fails_like(
+ [
+ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only'
+ ],
+ qr/\Qg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
command_ok(
[
--
2.39.3
hi.
some minor issues come to my mind when I look at it again.
looking at set_null_conf,
i think "if (archDumpFormat != archNull)" can be:
if (archDumpFormat != archNull)
{
OPF = fopen(toc_path, "w");
if (!OPF)
pg_fatal("could not open global.dat file: \"%s\" for writing: %m",
toc_path);
}
some places we use ``fopen(filename, PG_BINARY_W)``,
some places we use ``fopen(filename, "w");``
kind of inconsistent...
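For what it's worth, the mode string matters on Windows, where a text-mode "w" stream translates \n to \r\n on write; PG_BINARY_W suppresses that. A standalone sketch of why the modes should be used consistently (PG_BINARY_W/PG_BINARY_R are re-defined locally here as an assumption, mirroring their definitions in src/include/c.h):

```c
#include <stdio.h>
#include <string.h>

/* Assumption: local re-definition mirroring src/include/c.h */
#ifdef WIN32
#define PG_BINARY_W "wb"
#define PG_BINARY_R "rb"
#else
#define PG_BINARY_W "w"
#define PG_BINARY_R "r"
#endif

/* Write buf to path using the binary-safe mode; returns 0 on success. */
static int
write_binary(const char *path, const char *buf)
{
	FILE	   *fp = fopen(path, PG_BINARY_W);

	if (fp == NULL)
		return -1;
	fputs(buf, fp);
	fclose(fp);
	return 0;
}

/* Read path back byte-for-byte; returns the number of bytes read. */
static size_t
read_binary(const char *path, char *buf, size_t bufsize)
{
	FILE	   *fp = fopen(path, PG_BINARY_R);
	size_t		n;

	if (fp == NULL)
		return 0;
	n = fread(buf, 1, bufsize - 1, fp);
	buf[n] = '\0';
	fclose(fp);
	return n;
}
```

With binary modes on both sides, the bytes written to global.dat always round-trip unchanged, regardless of platform.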
+ printf(_(" -F, --format=c|d|t|p output file format
(custom, directory, tar,\n"
+ " plain text (default))\n"));
this indentation level is not right?
if we look closely at the surrounding output of `pg_dumpall --help`.
pg_dump.sgml --create option description:
This option is ignored when emitting an archive (non-text) output file. For the
archive formats, you can specify the option when you call pg_restore.
in runPgDump, we have:
/*
* If this is non-plain format dump, then append file name and dump
* format to the pg_dump command to get archive dump.
*/
if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
dbfile, create_opts);
...
}
so in here, create_opts is not necessary per pg_dump.sgml above description.
we can simplify it as:
if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" --file=%s", pg_dump_bin, dbfile);
}
?
hi.
$BIN10/pg_restore --globals-only --verbose --file=test.sql x.dump
it will create a "test.sql" file, but it should create file test.sql
(no double quotes).
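The stray quotes come from embedding escaped double quotes in the snprintf format; fopen() takes the path literally, so shell-style quoting is unnecessary and ends up in the filename. A minimal sketch contrasting the two (MAXPGPATH re-defined locally as an assumption):

```c
#include <stdio.h>

#define MAXPGPATH 1024		/* assumption: mirrors src/include/pg_config_manual.h */

/* Buggy: wraps the path in literal double quotes, so fopen() later
 * creates a file named "test.sql" -- quotes included. */
static void
build_path_buggy(char *dst, const char *outfile)
{
	snprintf(dst, MAXPGPATH, "\"%s\"", outfile);
}

/* Fixed: pass the path through untouched. */
static void
build_path_fixed(char *dst, const char *outfile)
{
	snprintf(dst, MAXPGPATH, "%s", outfile);
}
```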
------<>>>>------
if (archDumpFormat != archNull &&
(!filename || strcmp(filename, "") == 0))
{
pg_log_error("options -F/--format=d|c|t requires option
-f/--filename with non-empty string");
...
}
here, it should be
pg_log_error("options -F/--format=d|c|t requires option -f/--file with
non-empty string");
------<>>>>------
the following pg_dumpall, pg_restore not working.
$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_restore --file=3.sql x1.dump
ERROR: pg_restore: error: directory "x1.dump" does not appear to be a
valid archive ("toc.dat" does not exist)
these two also not working:
$BIN10/pg_dumpall --format=custom --file=x1.dump --verbose --globals-only
$BIN10/pg_restore --file=3.sql --format=custom x1.dump
error message:
pg_restore: error: could not read from input file: Is a directory
------<>>>>------
IsFileExistsInDirectory function is the same as _fileExistsInDirectory.
Can we make _fileExistsInDirectory extern function?
+ /* If global.dat and map.dat are exist, then proces them. */
+ if (IsFileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat")
+ && IsFileExistsInDirectory(pg_strdup(inputFileSpec),
"map.dat"))
+ {
comment typo, "proces" should "process".
here, we don't need pg_strdup?
------<>>>>------
# pg_restore tests
+command_fails_like(
+ [
+ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only'
+ ],
+ qr/\Qg_restore: error: option --exclude-database cannot be used
together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together
with -g/--globals-only');
We can put the above test on src/bin/pg_dump/t/001_basic.pl,
since validating these conflict options don't need a cluster to be set up.
typedef struct SimpleDatabaseOidListCell
and
typedef struct SimpleDatabaseOidList
need also put into src/tools/pgindent/typedefs.list
hi.
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+ result = executeQuery(conn, query->data);
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
i think here you should use PQgetisnull(result, 0, 0)
?
example: pg_dumpall and pg_restore:
$BIN10/pg_dumpall --verbose --format=custom --file=x12.dump
$BIN10/pg_restore --verbose --dbname=src10 x12.dump
some log message for the above command:
pg_restore: found dbname as : "template1" and db_oid:1 in map.dat file
while restoring
pg_restore: found dbname as : "s1" and db_oid:17960 in map.dat file
while restoring
pg_restore: found dbname as : "src10" and db_oid:5 in map.dat file
while restoring
pg_restore: found total 3 database names in map.dat file
pg_restore: needs to restore 3 databases out of 3 databases
pg_restore: restoring dump of pg_dumpall without -C option, there
might be multiple databases in directory.
pg_restore: restoring database "template1"
pg_restore: connecting to database for restore
pg_restore: implied data-only restore
pg_restore: restoring database "s1"
pg_restore: connecting to database for restore
pg_restore: processing data for table "public.t"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3376; 0 17961 TABLE DATA t jian
pg_restore: error: could not execute query: ERROR: relation
"public.t" does not exist
Command was: COPY public.t (a) FROM stdin;
1. message: "pg_restore: implied data-only restore"
Normally pg_dump and pg_restore will dump the schema and the data,
then when we are connecting to the same database with pg_restore,
there will be lots of schema elements already exists ERROR.
but the above command case, pg_restore only restores the content/data
not schema, that's why there is very little error happening.
so here pg_restore not restore schema seems not ok?
2. pg_dumpall with non-text mode, we don't have \connect command in
file global.dat or map.dat
I have database "s1" with table "public.t".
if I create a table src10.public.t (database.schema.table) with column a.
then pg_restore will restore content of s1.public.t (database s1) to
src10.public.t (database src10).
in ConnectDatabase(Archive *AHX,
const ConnParams *cparams,
bool isReconnect)
i added
if (cparams->dbname)
fprintf(stderr, "pg_backup_db.c:%d %s called connecting to %s
now\n", __LINE__, __func__, cparams->dbname);
to confirm that we are connecting the same database "src10", while
dumping all the contents in x12.dump.
Thanks Jian for the detailed review and testing.
On Mon, 20 Jan 2025 at 21:32, jian he <jian.universality@gmail.com> wrote:
hi.
some minor issues come to my mind when I look at it again.
looking at set_null_conf,
i think "if (archDumpFormat != archNull)" can be:
if (archDumpFormat != archNull)
{
OPF = fopen(toc_path, "w");
if (!OPF)
pg_fatal("could not open global.dat file: \"%s\" for writing: %m",
toc_path);
}
some places we use ``fopen(filename, PG_BINARY_W)``,
some places we use ``fopen(filename, "w");``
kind of inconsistent...
Fixed. We should use PG_BINARY_W/PG_BINARY_R.
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
this indentation level is not right?
if we look closely at the surrounding output of `pg_dumpall --help`.
Fixed.
pg_dump.sgml --create option description:
This option is ignored when emitting an archive (non-text) output file. For the
archive formats, you can specify the option when you call pg_restore.
in runPgDump, we have:
/*
* If this is non-plain format dump, then append file name and dump
* format to the pg_dump command to get archive dump.
*/
if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
dbfile, create_opts);
...
}
so in here, create_opts is not necessary per pg_dump.sgml above description.
we can simplify it as:
if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" --file=%s", pg_dump_bin, dbfile);
}
?
We were already using the same code before this patch, and I haven't
tested it without create_opts. If your theory is right, you could
submit a patch for this change in a separate thread.
On Tue, 21 Jan 2025 at 09:37, jian he <jian.universality@gmail.com> wrote:
hi.
$BIN10/pg_restore --globals-only --verbose --file=test.sql x.dump
it will create a "test.sql" file, but it should create file test.sql
(no double quotes).
Fixed.
------<>>>>------
if (archDumpFormat != archNull &&
(!filename || strcmp(filename, "") == 0))
{
pg_log_error("options -F/--format=d|c|t requires option
-f/--filename with non-empty string");
...
}
here, it should be
pg_log_error("options -F/--format=d|c|t requires option -f/--file with
non-empty string");
Fixed.
------<>>>>------
the following pg_dumpall, pg_restore not working.
$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_restore --file=3.sql x1.dump
ERROR: pg_restore: error: directory "x1.dump" does not appear to be a
valid archive ("toc.dat" does not exist)
Fixed.
these two also not working:
$BIN10/pg_dumpall --format=custom --file=x1.dump --verbose --globals-only
$BIN10/pg_restore --file=3.sql --format=custom x1.dump
Fixed.
error message:
pg_restore: error: could not read from input file: Is a directory
Fixed.
------<>>>>------
IsFileExistsInDirectory function is the same as _fileExistsInDirectory.
Can we make _fileExistsInDirectory extern function?
No, we can't, as we are using this function in different modules.
+ /* If global.dat and map.dat are exist, then proces them. */
+ if (IsFileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat")
+ && IsFileExistsInDirectory(pg_strdup(inputFileSpec), "map.dat"))
+ {
comment typo, "proces" should "process".
Fixed.
here, we don't need pg_strdup?
In most places, we are dumping strings so I kept the same here also.
------<>>>>------
# pg_restore tests
+command_fails_like(
+ [
+ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only'
+ ],
+ qr/\Qg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
We can put the above test on src/bin/pg_dump/t/001_basic.pl,
since validating these conflict options don't need a cluster to be set up.
Done.
typedef struct SimpleDatabaseOidListCell
and
typedef struct SimpleDatabaseOidList
need also put into src/tools/pgindent/typedefs.list
Fixed.
On Tue, 21 Jan 2025 at 15:00, jian he <jian.universality@gmail.com> wrote:
hi.
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+ result = executeQuery(conn, query->data);
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr;
+
+ outstr = PQgetvalue(result, 0, 0);
i think here you should use PQgetisnull(result, 0, 0)
Fixed.
example: pg_dumpall and pg_restore:
$BIN10/pg_dumpall --verbose --format=custom --file=x12.dump
$BIN10/pg_restore --verbose --dbname=src10 x12.dump
some log message for the above command:
pg_restore: found dbname as : "template1" and db_oid:1 in map.dat file
while restoring
pg_restore: found dbname as : "s1" and db_oid:17960 in map.dat file
while restoring
pg_restore: found dbname as : "src10" and db_oid:5 in map.dat file
while restoring
pg_restore: found total 3 database names in map.dat file
pg_restore: needs to restore 3 databases out of 3 databases
pg_restore: restoring dump of pg_dumpall without -C option, there
might be multiple databases in directory.
pg_restore: restoring database "template1"
pg_restore: connecting to database for restore
pg_restore: implied data-only restore
pg_restore: restoring database "s1"
pg_restore: connecting to database for restore
pg_restore: processing data for table "public.t"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3376; 0 17961 TABLE DATA t jian
pg_restore: error: could not execute query: ERROR: relation
"public.t" does not exist
Command was: COPY public.t (a) FROM stdin;
1. message: "pg_restore: implied data-only restore"
Normally pg_dump and pg_restore will dump the schema and the data,
then when we are connecting to the same database with pg_restore,
there will be lots of schema elements already exists ERROR.
but the above command case, pg_restore only restores the content/data
not schema, that's why there is very little error happening.
so here pg_restore not restore schema seems not ok?
2. pg_dumpall with non-text mode, we don't have \connect command in
file global.dat or map.dat
I have database "s1" with table "public.t".
if I create a table src10.public.t (database.schema.table) with column a.
then pg_restore will restore content of s1.public.t (database s1) to
src10.public.t (database src10).
in ConnectDatabase(Archive *AHX,
const ConnParams *cparams,
bool isReconnect)
i added
if (cparams->dbname)
fprintf(stderr, "pg_backup_db.c:%d %s called connecting to %s
now\n", __LINE__, __func__, cparams->dbname);
to confirm that we are connecting the same database "src10", while
dumping all the contents in x12.dump.
I will do some more study for this and will update. As of now, I added
the "--create" option in the dump.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch
From bca342933c95a6739e195d3332584737dd4f64cc Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 22 Jan 2025 00:08:22 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format ( plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
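global.dat above holds plain SQL that pg_restore executes one semicolon-terminated statement at a time. A standalone sketch of that splitting loop (an assumption: simplified relative to the patch's ReadOneStatement, which would also have to cope with semicolons inside quoted strings and dollar-quoting):

```c
#include <stdio.h>
#include <string.h>

/*
 * Read one semicolon-terminated statement from fp into buf.
 * Returns 0 when a statement was read, EOF at end of input.
 * Naive: does not handle ';' inside string literals.
 */
static int
read_one_statement(char *buf, size_t bufsize, FILE *fp)
{
	size_t		n = 0;
	int			c = EOF;

	while (n < bufsize - 1 && (c = fgetc(fp)) != EOF)
	{
		buf[n++] = (char) c;
		if (c == ';')
			break;
	}
	buf[n] = '\0';

	return (n == 0 && c == EOF) ? EOF : 0;
}
```

Each statement returned by the loop would then be handed to PQexec(), as the restore path does.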
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=d|t|c is specified and there is no toc.dat in main directory, then check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from map.dat list.
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
TODO2: We need to make changes for exit_nicely, as we add one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for new added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 78 ++-
doc/src/sgml/ref/pg_restore.sgml | 29 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 314 ++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 516 +++++++----------
src/bin/pg_dump/pg_restore.c | 701 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 11 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1373 insertions(+), 340 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f2792589..8ca49a65977 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. To dump all databases in archive
+ format, each into its own subdirectory, pass a non-plain format.
+ The default is the plain format.
+
+ If a non-plain format is given, a global.dat file (global SQL commands)
+ and a map.dat file (the dboid and dbname of every database) will be
+ created, along with a subdirectory named databases. Under this
+ databases subdirectory, there will be one entry named after each
+ database's dboid; if <option>--format</option> is directory, the
+ toc.dat and other dump files will be placed under that dboid
+ subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under each dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..ba2913b3356 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -166,6 +166,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +334,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..ace5077085c
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,314 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * Common code shared by pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, pass the server version back to the caller. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/* ----------
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ * ----------
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Validate the given format name and return the corresponding ArchiveFormat.
+ */
+ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a27c3e9fb89
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname,
+ const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern ArchiveFormat parseDumpFormat(const char *format);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..7153d4a40b6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..f70ea9233fe 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: raised so that a single pg_restore run can restore up to 100 databases. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8f73a5df956..eae626f6213 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1147,7 +1147,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..5915b1b0516 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,24 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +107,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +121,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +145,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +187,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +238,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +266,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +417,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * A non-plain format requires a file name, which is used as the name of
+ * the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +478,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if one was specified, otherwise use stdout. For a
+ * non-plain format, create the output directory and a global.dat file
+ * within it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create a new directory, or accept an existing empty one. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +513,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +522,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +544,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -607,7 +642,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +655,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +672,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1524,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1544,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1552,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For a non-plain format, create a "databases" subdirectory under the
+ * main output directory; each per-database dump (as produced by a
+ * single-database pg_dump run) is then written under it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,6 +1593,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain format, record this database's OID and name in the
+ * map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1547,9 +1623,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1558,19 +1642,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1675,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1685,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is a non-plain dump, pass the output path and the archive
+ * format to pg_dump so that it produces an archive-format dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1764,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1859,58 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * directory if it is empty.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into this directory, either remove or empty "
+ "the directory \"%s\" or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938a..79f61395ae3 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,67 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "compress_io.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +117,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +171,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +200,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +227,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global objects from global.dat */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +349,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +380,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -383,28 +441,80 @@ main(int argc, char **argv)
if (opts->formatName)
{
- switch (opts->formatName[0])
+ opts->format = parseDumpFormat(opts->formatName);
+
+ /* Plain format is not supported for pg_restore. */
+ if (opts->format == archNull)
{
- case 'c':
- case 'C':
- opts->format = archCustom;
- break;
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ opts->formatName);
+ }
+ }
- case 'd':
- case 'D':
- opts->format = archDirectory;
- break;
+ /*
+ * If no toc.dat file is present in the given path, check for global.dat.
+ * If global.dat is present, restore all databases listed in map.dat (if
+ * it exists), skipping any that match an --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
- case 't':
- case 'T':
- opts->format = archTar;
- break;
+ /* Connect to database to execute global sql commands from global.dat file. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
- default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
- opts->formatName);
- }
- }
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from a toc.dat-based archive.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
AH = OpenArchive(inputFileSpec, opts->format);
@@ -431,11 +541,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +581,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +594,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +733,542 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the named file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc until a statement-terminating
+ * semicolon at end of line is seen (the SQL statement terminator in global.dat).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any databases matching an --exclude-database
+ * pattern.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Check each dbname to see whether restoring it should be skipped. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * Set the flag so this dbname is removed from the list.
+ *
+ * Note: we can't remove the pattern from the skip list, as
+ * multiple database names might match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Increment count if db needs to be restored. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of database
+ * names and their corresponding OIDs.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("map.dat is not present in the pg_dumpall dump, so there is nothing to restore");
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove \n from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line: %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be skipped,
+ * but for now we list all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory, based on
+ * the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("connection to database \"postgres\" failed; trying \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection, so --exclude-database patterns will be treated as plain names");
+ }
+ }
+
+ /*
+ * TODO: the database-skipping behavior still needs a proper design.
+ *
+ * Skip any explicitly excluded database. If there is no database
+ * connection, treat each pattern as a plain name to compare.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the connection; we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * To restore multiple databases, either the -C (create database) option
+ * must be given or all target databases must exist before pg_restore runs.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring a pg_dumpall dump without -C; the directory may contain multiple databases");
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the maximum number of AH handles that can be registered on exit. */
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than 100 databases in a single pg_restore run, but %d were requested", num_db_restore);
+ }
+
+ /*
+ * XXX/TODO: at this point we have the list of databases to restore, with
+ * the excluded names already removed. Now we could launch parallel
+ * workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that objects
+ * are restored into each already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute all global SQL commands, one
+ * statement at a time. A semicolon is treated as the statement terminator.
+ * If outfile is given, copy all SQL commands into it instead of executing
+ * them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, PG_BINARY_W);
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now copy global.dat into the outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * appends a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * delete the given cell from the dbname/dboid list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when the last cell is removed. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * is_full_pattern
+ *
+ * This runs the server's substring() function to apply the pattern to str,
+ * then compares the result with str itself.
+ *
+ * Returns true if str can be fully matched by the given pattern.
+ *
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr = NULL;
+
+ /*
+ * If the output of substring() matches str, then the pattern fully
+ * matches str.
+ */
+ if (!PQgetisnull(result, 0, 0))
+ outstr = PQgetvalue(result, 0, 0);
+
+ if (outstr && pg_strcasecmp(outstr, str) == 0)
+ {
+ PQclear(result);
+ destroyPQExpBuffer(query);
+ return true;
+ }
+ }
+ }
+ else
+ {
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query->data);
+ }
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..96a728adfab
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,13 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '-p', $port, '-f', $plainfile,
+ "--exclude-database=grabadge",
+ '--globals-only' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +233,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d5aa5c295ae..5608512aced 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2672,6 +2672,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
hi.
The four attached patches address
TODO1: We need to think about --exclude-database=PATTERN for pg_restore.
They are based on your v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
0001: pg_dumpall --exclude-database=PATTERN already works;
the main function that resolves pattern matching is expand_dbname_patterns.
Make it an extern function so pg_restore --exclude-database can also use it.
0002: cosmetic code changes outside pg_restore.c
0003: cosmetic code changes in pg_restore.c
0004: fully implement pg_restore --exclude-database=PATTERN,
similar to pg_dumpall.c.
It declares two file-static variables:
static SimpleStringList database_exclude_names = {NULL, NULL};
static SimpleStringList db_exclude_patterns = {NULL, NULL};
I also deleted the function is_full_pattern.
I used
$BIN10/pg_restore --exclude-database=*x* --exclude-database=*s*
--exclude-database=*t* --verbose --file=test.sql x1.dump
and checked the verbose messages to verify my changes.
Attachments:
v11-0002-minor-coesmetic-change-not-in-pg_restore.c.no-cfbot
From 05124e5c1d65a0822f823b8991242e9cf89bb755 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Thu, 23 Jan 2025 15:27:32 +0800
Subject: [PATCH v11 2/4] minor coesmetic change not in pg_restore.c
---
src/bin/pg_dump/common_dumpall_restore.h | 9 ++++-----
src/bin/pg_dump/pg_dumpall.c | 2 +-
2 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
index 2476449fa7..afc83d5c70 100644
--- a/src/bin/pg_dump/common_dumpall_restore.h
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -16,11 +16,10 @@
#include "pg_backup.h"
-extern PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error,
- const char *progname, const char **connstr, int *server_version);
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
extern PGresult *executeQuery(PGconn *conn, const char *query);
extern ArchiveFormat parseDumpFormat(const char *format);
extern void expand_dbname_patterns(PGconn *conn,
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 244bf72986..1e2949119b 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -671,7 +671,7 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
- " plain text (default))\n"));
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
--
2.34.1
v11-0004-preliminary-work-for-pg_restore-exclude-datab.no-cfbot
From 77df1aa11496b8ef0f9547144aa5f5c8af6f5e7c Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Thu, 23 Jan 2025 17:03:40 +0800
Subject: [PATCH v11 4/4] preliminary work for pg_restore
--exclude-database=PATTERN
---
src/bin/pg_dump/pg_restore.c | 120 +++++++++--------------------------
1 file changed, 29 insertions(+), 91 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index a715448dce..0a1a7c1ecd 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -71,9 +71,6 @@ typedef struct SimpleDatabaseOidList
SimpleDatabaseOidListCell *tail;
} SimpleDatabaseOidList;
-static void
-simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
-
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
static bool IsFileExistsInDirectory(const char *dir, const char *filename);
@@ -81,22 +78,23 @@ static bool restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data);
static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+ RestoreOptions *opts, int numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
- const char *outfile);
-static int filter_dbnames_for_restore(PGconn *conn,
- SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+ const char *outfile);
+static int filter_dbnames_for_restore(SimpleDatabaseOidList *dbname_oid_list);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list);
static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
const char *dbname);
-static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
static void simple_string_full_list_delete(SimpleStringList *list);
static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
SimpleDatabaseOidListCell *cell,
SimpleDatabaseOidListCell *prev);
+static SimpleStringList database_exclude_names = {NULL, NULL};
+static SimpleStringList db_exclude_patterns = {NULL, NULL};
+
int
main(int argc, char **argv)
{
@@ -118,8 +116,7 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
- SimpleStringList db_exclude_patterns = {NULL, NULL};
- bool globals_only = false;
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
@@ -491,8 +488,7 @@ main(int argc, char **argv)
else
{
/* Now restore all the databases from map.dat file. */
- exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
- opts, numWorkers);
+ exit_code = restoreAllDatabases(conn, inputFileSpec, opts, numWorkers);
}
/* Free db pattern list. */
@@ -798,14 +794,13 @@ ReadOneStatement(StringInfo inBuf, FILE *pfile)
/*
* filter_dbnames_for_restore
*
- * This will remove names from all dblist that are given with exclude-database
- * option.
+ * This will remove database entries from dbname_oid_list that match an
+ * --exclude-database pattern.
*
- * returns number of dbnames those will be restored.
+ * Returns the number of databases that will be restored.
*/
static int
-filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
- SimpleStringList db_exclude_patterns)
+filter_dbnames_for_restore(SimpleDatabaseOidList *dbname_oid_list)
{
SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
SimpleDatabaseOidListCell *dboidprecell = NULL;
@@ -822,10 +817,9 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
SimpleDatabaseOidListCell *next = dboid_cell->next;
/* Now match this dbname with exclude-database list. */
- for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ for (SimpleStringListCell *celldb = database_exclude_names.head; celldb; celldb = celldb->next)
{
- if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
- (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
{
/*
* As we need to skip this dbname so set flag to remove it from
@@ -937,9 +931,13 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
*/
static int
restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts,
- int numWorkers)
+ RestoreOptions *opts, int numWorkers)
{
+ /*
+ * dbname_oid_list initially stores all the databases to be restored from
+ * the map.dat file. It then filters out any databases that match the
+ * --exclude-database pattern.
+ */
SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
SimpleDatabaseOidListCell *dboid_cell;
int exit_code = 0;
@@ -948,12 +946,11 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+ pg_log_info("found total %d databases in map.dat file", num_total_db);
/* If map.dat has no entry, return from here. */
if (dbname_oid_list.head == NULL)
return 0;
- pg_log_info("found total %d database names in map.dat file", num_total_db);
-
if (!conn)
{
pg_log_info("trying to connect database \"postgres\" to dump into out file");
@@ -982,18 +979,17 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* Skip any explicitly excluded database. If there is no database
* connection, then just consider pattern as simple name to compare.
*/
- num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
- db_exclude_patterns);
+ expand_dbname_patterns(conn, &db_exclude_patterns, &database_exclude_names);
+ num_db_restore = filter_dbnames_for_restore(&dbname_oid_list);
/* Close the db connection as we are done globals and patterns. */
if (conn)
PQfinish(conn);
- /* Exit if no db needs to be restored. */
- if (dbname_oid_list.head == NULL)
- return 0;
-
pg_log_info("needs to restore %d databases out of %d databases", num_db_restore, num_total_db);
+ /* Exit if no db needs to be restored. */
+ if (num_db_restore == 0)
+ return 0;
/*
* To restore multiple databases, -C (create database) option should be specified
@@ -1006,7 +1002,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (num_db_restore > 100)
{
simple_db_oid_full_list_delete(&dbname_oid_list);
- pg_fatal("cound not restore more than 100 databases by single pg_restore, here total db:%d", num_db_restore);
+ pg_fatal("could not restore more than 100 databases by single pg_restore, here total databases:%d", num_db_restore);
}
/*
@@ -1042,12 +1038,9 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
exit_code = dbexit_code;
dboid_cell = dboid_cell->next;
- } /* end while */
+ }
- /* Log number of processed databases.*/
- pg_log_info("number of restored databases are %d", num_db_restore);
-
- /* Free dbname and dboid list. */
+ /* Free dbname_oid_list */
simple_db_oid_full_list_delete(&dbname_oid_list);
return exit_code;
@@ -1218,58 +1211,3 @@ simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell
pfree(cell);
}
}
-
-/*
- * is_full_pattern
- *
- * This uses substring function to make 1st string from pattern.
- * Outstring of substring function is compared with 1st string to
- * validate this pattern.
- *
- * Returns true if 1st string can be constructed from given pattern.
- *
- */
-static bool
-is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
-{
- PQExpBuffer query;
- PGresult *result;
-
- query = createPQExpBuffer();
-
- printfPQExpBuffer(query,
- "SELECT substring ( "
- " '%s' , "
- " '%s' ) ", str, ptrn);
-
- result = executeQuery(conn, query->data);
-
- if (PQresultStatus(result) == PGRES_TUPLES_OK)
- {
- if (PQntuples(result) == 1)
- {
- const char *outstr = NULL;
-
- /*
- * If output string of substring function is matches with str, then
- * we can construct str from pattern.
- */
- if (!PQgetisnull(result, 0, 0))
- outstr = PQgetvalue(result, 0, 0);
-
- if (outstr && pg_strcasecmp(outstr, str) == 0)
- {
- PQclear(result);
- destroyPQExpBuffer(query);
- return true;
- }
- }
- }
- else
- pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), query->data);
-
- PQclear(result);
- destroyPQExpBuffer(query);
-
- return false;
-}
--
2.34.1
v11-0001-move-expand_dbname_patterns-to-common_dumpall.no-cfbot
From b66c5b341fff463c03312f0317e2b828e9a2d8dd Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Thu, 23 Jan 2025 15:11:55 +0800
Subject: [PATCH v11 1/4] move expand_dbname_patterns to
common_dumpall_restore.c
---
src/bin/pg_dump/common_dumpall_restore.c | 54 ++++++++++++++++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 3 ++
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 54 ------------------------
4 files changed, 58 insertions(+), 55 deletions(-)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
index ace5077085..4503a98f25 100644
--- a/src/bin/pg_dump/common_dumpall_restore.c
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -312,3 +312,57 @@ parseDumpFormat(const char *format)
return archDumpFormat;
}
+
+/*
+ * Find a list of database names that match the given patterns.
+ * See also expand_table_name_patterns() in pg_dump.c
+ */
+void
+expand_dbname_patterns(PGconn *conn,
+ SimpleStringList *patterns,
+ SimpleStringList *names)
+{
+ PQExpBuffer query;
+ PGresult *res;
+
+ if (patterns->head == NULL)
+ return; /* nothing to do */
+
+ query = createPQExpBuffer();
+
+ /*
+ * The loop below runs multiple SELECTs, which might sometimes result in
+ * duplicate entries in the name list, but we don't care, since all we're
+ * going to do is test membership of the list.
+ */
+
+ for (SimpleStringListCell *cell = patterns->head; cell; cell = cell->next)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query,
+ "SELECT datname FROM pg_catalog.pg_database n\n");
+ processSQLNamePattern(conn, query, cell->val, false,
+ false, NULL, "datname", NULL, NULL, NULL,
+ &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ cell->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+ for (int i = 0; i < PQntuples(res); i++)
+ {
+ simple_string_list_append(names, PQgetvalue(res, i, 0));
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ destroyPQExpBuffer(query);
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
index a27c3e9fb8..2476449fa7 100644
--- a/src/bin/pg_dump/common_dumpall_restore.h
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -23,4 +23,7 @@ extern PGconn *connectDatabase(const char *dbname,
const char *progname, const char **connstr, int *server_version);
extern PGresult *executeQuery(PGconn *conn, const char *query);
extern ArchiveFormat parseDumpFormat(const char *format);
+extern void expand_dbname_patterns(PGconn *conn,
+ SimpleStringList *patterns,
+ SimpleStringList *names);
#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index eae626f621..72d9cefd81 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1653,7 +1653,7 @@ expand_foreign_server_name_patterns(Archive *fout,
/*
* Find the OIDs of all tables matching the given list of patterns,
* and append them to the given OID list. See also expand_dbname_patterns()
- * in pg_dumpall.c
+ * in common_dumpall_restore.c
*/
static void
expand_table_name_patterns(Archive *fout,
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 5915b1b051..244bf72986 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -77,8 +77,6 @@ static void buildShSecLabels(PGconn *conn,
const char *objtype, const char *objname,
PQExpBuffer buffer);
static void executeCommand(PGconn *conn, const char *query);
-static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
- SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static void create_or_open_dir(const char *dirname);
@@ -1466,59 +1464,7 @@ dumpUserConfig(PGconn *conn, const char *username)
destroyPQExpBuffer(buf);
}
-/*
- * Find a list of database names that match the given patterns.
- * See also expand_table_name_patterns() in pg_dump.c
- */
-static void
-expand_dbname_patterns(PGconn *conn,
- SimpleStringList *patterns,
- SimpleStringList *names)
-{
- PQExpBuffer query;
- PGresult *res;
- if (patterns->head == NULL)
- return; /* nothing to do */
-
- query = createPQExpBuffer();
-
- /*
- * The loop below runs multiple SELECTs, which might sometimes result in
- * duplicate entries in the name list, but we don't care, since all we're
- * going to do is test membership of the list.
- */
-
- for (SimpleStringListCell *cell = patterns->head; cell; cell = cell->next)
- {
- int dotcnt;
-
- appendPQExpBufferStr(query,
- "SELECT datname FROM pg_catalog.pg_database n\n");
- processSQLNamePattern(conn, query, cell->val, false,
- false, NULL, "datname", NULL, NULL, NULL,
- &dotcnt);
-
- if (dotcnt > 0)
- {
- pg_log_error("improper qualified name (too many dotted names): %s",
- cell->val);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- res = executeQuery(conn, query->data);
- for (int i = 0; i < PQntuples(res); i++)
- {
- simple_string_list_append(names, PQgetvalue(res, i, 0));
- }
-
- PQclear(res);
- resetPQExpBuffer(query);
- }
-
- destroyPQExpBuffer(query);
-}
/*
* Dump contents of databases.
--
2.34.1
v11-0003-minor-coesmetic-change-in-pg_restore.c.no-cfbot
From fcb2dd75ab9dc1fd3dd72c91f55d1b7b37a70c3f Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Thu, 23 Jan 2025 17:02:31 +0800
Subject: [PATCH v11 3/4] minor coesmetic change in pg_restore.c
---
src/bin/pg_dump/pg_restore.c | 39 ++++++++++++++++++------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 79f61395ae..a715448dce 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -77,8 +77,8 @@ simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *d
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
static bool IsFileExistsInDirectory(const char *dir, const char *filename);
-static bool restoreOneDatabase(const char *inputFileSpec,
- RestoreOptions *opts, int numWorkers, bool append_data);
+static bool restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
@@ -87,14 +87,15 @@ static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
- SimpleDatabaseOidList *dbname_oid_list);
+ SimpleDatabaseOidList *dbname_oid_list);
static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname);
+ const char *dbname);
static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
static void simple_string_full_list_delete(SimpleStringList *list);
static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
@@ -446,8 +447,8 @@ main(int argc, char **argv)
/* Plain format is not supported for pg_restore. */
if (opts->format == archNull)
{
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
- opts->formatName);
+ pg_fatal("unrecognized archive format \"%s\", one of \"c\", \"d\" or \"t\" must be specified",
+ opts->formatName);
}
}
@@ -458,7 +459,7 @@ main(int argc, char **argv)
* --exclude-database patterns.
*/
if (inputFileSpec != NULL &&
- !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
{
/* If global.dat is exist, then process it. */
if (IsFileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat"))
@@ -470,8 +471,8 @@ main(int argc, char **argv)
if (opts->cparams.dbname)
{
conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
- progname, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
@@ -498,8 +499,8 @@ main(int argc, char **argv)
simple_string_full_list_delete(&db_exclude_patterns);
return exit_code;
- }/* end if */
- }/* end if */
+ }
+ }
return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
}
@@ -511,7 +512,7 @@ main(int argc, char **argv)
*/
static bool
restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, bool append_data)
+ int numWorkers, bool append_data)
{
Archive *AH;
bool exit_code;
@@ -955,20 +956,20 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (!conn)
{
- pg_log_info("trying to connect postgres database to dump into out file");
+ pg_log_info("trying to connect database \"postgres\" to dump into out file");
conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
- progname, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
/* Try with template1. */
if (!conn)
{
- pg_log_info("trying to connect template1 database as failed to connect to postgres to dump into out file");
+ pg_log_info("trying to connect database \"template1\" as failed to connect to database \"postgres\" to dump into out file");
conn = connectDatabase("template1", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
- progname, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
if (!conn)
pg_log_info("there is no database connection so consider pattern as simple name for exclude-database");
--
2.34.1
On Thu, 23 Jan 2025 at 14:59, jian he <jian.universality@gmail.com> wrote:
hi.
The four patches attached are to solve the
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
It is based on your v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
0001: pg_dumpall --exclude-database=PATTERN already works; the main function resolving pattern matching is expand_dbname_patterns.
Make it an extern function, so pg_restore --exclude-database can also use it.
Hi Jian,
We can't use the same expand_dbname_patterns function in pg_restore.
In the 1st patch I mistakenly used this function as well, but then I
realised that we should not use it due to a limitation of pg_restore.
While doing pg_dumpall, we have all the existing database names in
the pg_database catalog, but while restoring, we don't have all the
databases in the catalog.
Instead, we will read dbnames from the map.dat file to skip matching
patterns for restore.
Ex: let's say we have a fresh server with only the postgres and template1
databases, and we want to restore a backup
whose map.dat file has dbname=db_123 and dbname=db_234.
If we want to use --exclude-database=db_123, then
your patch will not work as this db hasn't been created.
Please cross verify again and let me know your feedback.
I think, as of now, my v11 patch is working as expected.
0002 cosmetic code changes not in pg_restore.c
0003 cosmetic code changes in pg_restore.c
0004 fully implement pg_restore --exclude-database=PATTERN
similar to pg_dumpall.c
declare two file static variables:
static SimpleStringList database_exclude_names = {NULL, NULL};
static SimpleStringList db_exclude_patterns = {NULL, NULL};
I also deleted the function is_full_pattern. I use
$BIN10/pg_restore --exclude-database=*x* --exclude-database=*s* --exclude-database=*t* --verbose --file=test.sql x1.dump
and the verbose messages to verify my changes.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Thu, Jan 23, 2025 at 6:35 PM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
hi.
After some tests and thinking about your reply, I admit that using
expand_dbname_patterns
in pg_restore will not work.
We need to do pattern matching against the map.dat file.
Please check the attached v12 series based on your
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch
v12-0001 cosmetic change.
v12-0002 implement pg_restore --exclude-database=PATTERN.
Main gist of the implementation: for each database name in the map.dat file,
check whether it matches PATTERN.
Pattern matching uses processSQLNamePattern; your substring approach will not work.
Some of the test cases:
$BIN10/pg_restore --exclude-database=* -Cd template1 --verbose dir10 > dir_format 2>&1
$BIN10/pg_restore --exclude-database=*x* -Cd template1 --verbose dir10 > dir_format 2>&1
$BIN10/pg_restore --exclude-database=?* -Cd template1 --verbose dir10 > dir_format 2>&1
Attachments:
v12-0002-pg_restore-exclude-database-PATTERN.no-cfbot
From 7e51aa6493edfcb7b3aafa8bce66642509e0cdec Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Fri, 24 Jan 2025 22:49:47 +0800
Subject: [PATCH v12 2/2] pg_restore --exclude-database=PATTERN
---
src/bin/pg_dump/common_dumpall_restore.c | 49 +++++++
src/bin/pg_dump/common_dumpall_restore.h | 3 +-
src/bin/pg_dump/pg_restore.c | 164 +++++++++++------------
3 files changed, 133 insertions(+), 83 deletions(-)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
index ace5077085..fd966c42a0 100644
--- a/src/bin/pg_dump/common_dumpall_restore.c
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -312,3 +312,52 @@ parseDumpFormat(const char *format)
return archDumpFormat;
}
+
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr -
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
index aef7abdf4f..d8893befca 100644
--- a/src/bin/pg_dump/common_dumpall_restore.h
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -16,10 +16,11 @@
#include "pg_backup.h"
-extern PGconn *v(const char *dbname, const char *connection_string, const char *pghost,
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
const char *pgport, const char *pguser,
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version);
extern PGresult *executeQuery(PGconn *conn, const char *query);
extern ArchiveFormat parseDumpFormat(const char *format);
+extern char *quote_literal_cstr(const char *s);
#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index a715448dce..cef8af652c 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -81,22 +81,27 @@ static bool restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data);
static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+ RestoreOptions *opts, int numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
- const char *outfile);
-static int filter_dbnames_for_restore(PGconn *conn,
- SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+ const char *outfile);
+
+static void expand_db_pattern_restore(PGconn *conn,
+ SimpleStringList *patterns,
+ SimpleStringList *names,
+ SimpleDatabaseOidList *dboid_list);
+static int filter_dbnames_for_restore(SimpleDatabaseOidList *dbname_oid_list);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list);
static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
const char *dbname);
-static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
static void simple_string_full_list_delete(SimpleStringList *list);
static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
SimpleDatabaseOidListCell *cell,
SimpleDatabaseOidListCell *prev);
+static SimpleStringList db_exclude_patterns = {NULL, NULL};
+static SimpleStringList database_exclude_names = {NULL, NULL};
int
main(int argc, char **argv)
{
@@ -118,7 +123,6 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
- SimpleStringList db_exclude_patterns = {NULL, NULL};
bool globals_only = false;
struct option cmdopts[] = {
@@ -491,8 +495,7 @@ main(int argc, char **argv)
else
{
/* Now restore all the databases from map.dat file. */
- exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
- opts, numWorkers);
+ exit_code = restoreAllDatabases(conn, inputFileSpec, opts, numWorkers);
}
/* Free db pattern list. */
@@ -795,17 +798,75 @@ ReadOneStatement(StringInfo inBuf, FILE *pfile)
return 'Q';
}
+
+/*
+ * Find a list of database names that match the given patterns,
+ * the results is stored in *names*.
+ */
+static void
+expand_db_pattern_restore(PGconn *conn,
+ SimpleStringList *patterns,
+ SimpleStringList *names,
+ SimpleDatabaseOidList *dboid_list)
+{
+ PQExpBuffer query;
+ PGresult *res;
+ SimpleDatabaseOidListCell *dboid_cell = NULL;
+
+ if (patterns->head == NULL)
+ return;
+
+ query = createPQExpBuffer();
+
+ /*
+ * the construct pattern matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
+ *
+ * XXX represents the string literal database name derived from the
+ * dboid_list variable, which is initially extracted from the map.dat
+ * file located in the backup directory.
+ * that's why we need quote_literal_cstr.
+ */
+ for (dboid_cell = dboid_list->head; dboid_cell; dboid_cell = dboid_cell->next)
+ {
+ for (SimpleStringListCell *cell = patterns->head; cell; cell = cell->next)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, cell->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ cell->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ simple_string_list_append(names, dboid_cell->db_name);
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+ }
+ destroyPQExpBuffer(query);
+}
+
/*
* filter_dbnames_for_restore
*
- * This will remove names from all dblist that are given with exclude-database
- * option.
+ * This will remove names from the dblist that match an element of
+ * the database_exclude_names list
*
* returns number of dbnames those will be restored.
*/
static int
-filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
- SimpleStringList db_exclude_patterns)
+filter_dbnames_for_restore(SimpleDatabaseOidList *dbname_oid_list)
{
SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
SimpleDatabaseOidListCell *dboidprecell = NULL;
@@ -821,18 +882,13 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
bool skip_db_restore = false;
SimpleDatabaseOidListCell *next = dboid_cell->next;
- /* Now match this dbname with exclude-database list. */
- for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ for (SimpleStringListCell *celldb = database_exclude_names.head; celldb; celldb = celldb->next)
{
- if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
- (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
{
/*
* As we need to skip this dbname so set flag to remove it from
* list.
- *
- * Note: we can't remove this pattern from skip list as we
- * might have multiple database name with this same pattern.
*/
skip_db_restore = true;
break;
@@ -937,8 +993,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
*/
static int
restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts,
- int numWorkers)
+ RestoreOptions *opts, int numWorkers)
{
SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
SimpleDatabaseOidListCell *dboid_cell;
@@ -977,13 +1032,13 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
}
/*
- * TODO: To skip databases, we need to make a design.
- *
- * Skip any explicitly excluded database. If there is no database
- * connection, then just consider pattern as simple name to compare.
+ * processing pg_restore --exclude-database=PATTERN.
*/
- num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
- db_exclude_patterns);
+ expand_db_pattern_restore(conn,
+ &db_exclude_patterns,
+ &database_exclude_names,
+ &dbname_oid_list);
+ num_db_restore = filter_dbnames_for_restore(&dbname_oid_list);
/* Close the db connection as we are done globals and patterns. */
if (conn)
@@ -1218,58 +1273,3 @@ simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell
pfree(cell);
}
}
-
-/*
- * is_full_pattern
- *
- * This uses substring function to make 1st string from pattern.
- * Outstring of substring function is compared with 1st string to
- * validate this pattern.
- *
- * Returns true if 1st string can be constructed from given pattern.
- *
- */
-static bool
-is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
-{
- PQExpBuffer query;
- PGresult *result;
-
- query = createPQExpBuffer();
-
- printfPQExpBuffer(query,
- "SELECT substring ( "
- " '%s' , "
- " '%s' ) ", str, ptrn);
-
- result = executeQuery(conn, query->data);
-
- if (PQresultStatus(result) == PGRES_TUPLES_OK)
- {
- if (PQntuples(result) == 1)
- {
- const char *outstr = NULL;
-
- /*
- * If output string of substring function is matches with str, then
- * we can construct str from pattern.
- */
- if (!PQgetisnull(result, 0, 0))
- outstr = PQgetvalue(result, 0, 0);
-
- if (outstr && pg_strcasecmp(outstr, str) == 0)
- {
- PQclear(result);
- destroyPQExpBuffer(query);
- return true;
- }
- }
- }
- else
- pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), query->data);
-
- PQclear(result);
- destroyPQExpBuffer(query);
-
- return false;
-}
--
2.34.1
v12-0001-coesmetic-change.no-cfbot
From 455d745ed7e6df09c3ada22c34bc5a888ef02304 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Fri, 24 Jan 2025 09:49:55 +0800
Subject: [PATCH v12 1/2] coesmetic change.
---
src/bin/pg_dump/common_dumpall_restore.h | 9 +++---
src/bin/pg_dump/pg_dumpall.c | 2 +-
src/bin/pg_dump/pg_restore.c | 39 ++++++++++++------------
3 files changed, 25 insertions(+), 25 deletions(-)
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
index a27c3e9fb8..aef7abdf4f 100644
--- a/src/bin/pg_dump/common_dumpall_restore.h
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -16,11 +16,10 @@
#include "pg_backup.h"
-extern PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error,
- const char *progname, const char **connstr, int *server_version);
+extern PGconn *v(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
extern PGresult *executeQuery(PGconn *conn, const char *query);
extern ArchiveFormat parseDumpFormat(const char *format);
#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 5915b1b051..5adeeb6d4d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -673,7 +673,7 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
- " plain text (default))\n"));
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 79f61395ae..a715448dce 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -77,8 +77,8 @@ simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *d
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
static bool IsFileExistsInDirectory(const char *dir, const char *filename);
-static bool restoreOneDatabase(const char *inputFileSpec,
- RestoreOptions *opts, int numWorkers, bool append_data);
+static bool restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
@@ -87,14 +87,15 @@ static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
- SimpleDatabaseOidList *dbname_oid_list);
+ SimpleDatabaseOidList *dbname_oid_list);
static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname);
+ const char *dbname);
static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
static void simple_string_full_list_delete(SimpleStringList *list);
static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
@@ -446,8 +447,8 @@ main(int argc, char **argv)
/* Plain format is not supported for pg_restore. */
if (opts->format == archNull)
{
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
- opts->formatName);
+ pg_fatal("unrecognized archive format \"%s\", one of \"c\", \"d\" or \"t\" must be specified",
+ opts->formatName);
}
}
@@ -458,7 +459,7 @@ main(int argc, char **argv)
* --exclude-database patterns.
*/
if (inputFileSpec != NULL &&
- !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
{
/* If global.dat is exist, then process it. */
if (IsFileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat"))
@@ -470,8 +471,8 @@ main(int argc, char **argv)
if (opts->cparams.dbname)
{
conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
- progname, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
@@ -498,8 +499,8 @@ main(int argc, char **argv)
simple_string_full_list_delete(&db_exclude_patterns);
return exit_code;
- }/* end if */
- }/* end if */
+ }
+ }
return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
}
@@ -511,7 +512,7 @@ main(int argc, char **argv)
*/
static bool
restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, bool append_data)
+ int numWorkers, bool append_data)
{
Archive *AH;
bool exit_code;
@@ -955,20 +956,20 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (!conn)
{
- pg_log_info("trying to connect postgres database to dump into out file");
+ pg_log_info("trying to connect database \"postgres\" to dump into out file");
conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
- progname, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
/* Try with template1. */
if (!conn)
{
- pg_log_info("trying to connect template1 database as failed to connect to postgres to dump into out file");
+ pg_log_info("trying to connect database \"template1\" as failed to connect to database \"postgres\" to dump into out file");
conn = connectDatabase("template1", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
- progname, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
if (!conn)
pg_log_info("there is no database connection so consider pattern as simple name for exclude-database");
--
2.34.1
hi.
The attached patch tries to refactor ReadOneStatement
to properly handle single and double quotes.
The commit message also has some tests in it.
It is based on your
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
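To make the intent concrete, here is a minimal, self-contained sketch of the quoting state machine the refactored ReadOneStatement implements. It works on an in-memory string rather than a FILE*/StringInfo, and the function name `statement_end` is hypothetical; dollar quoting ($$...$$), which the commit-message tests also exercise, is deliberately not handled here.

```c
#include <stddef.h>

/* Return the index just past the first ';' that lies outside single
 * or double quotes, or -1 if no unquoted ';' is found.  This mirrors
 * the patch's rule that a ';' inside quotes does not end a statement.
 * Dollar quoting is not handled in this sketch. */
static int
statement_end(const char *sql)
{
	char		quote = 0;		/* current quote char, or 0 if unquoted */

	for (int i = 0; sql[i] != '\0'; i++)
	{
		char		c = sql[i];

		if (quote)
		{
			if (c == quote)
				quote = 0;		/* closing quote */
		}
		else if (c == '\'' || c == '"')
			quote = c;			/* opening quote */
		else if (c == ';')
			return i + 1;		/* unquoted terminator */
	}
	return -1;
}
```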
Attachments:
v11-0001-refactoring-ReadOneStatement.no-cfbot (application/octet-stream)
From 11c1f38413415f582c0596ff745e4cbfb1848126 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Sun, 26 Jan 2025 22:36:28 +0800
Subject: [PATCH v11 1/1] refactoring ReadOneStatement escape single quote and
double quote for ReadOneStatement.
the following are some tests to prove it accurate.
create role y;
comment on role y is $$;"'$$;
create role "'";
create role "'';";
create role "';;;';";
create role z;
comment on role z is $$"";';$$;
create role ";
';
;
";
create role ";";
comment on role ";" is $$;"';',",' $$;
comment on role ";" is $$;"';',",'
$$;
create role "
";
comment on role z is $$;"';',",'';;'"";$$;
---
src/bin/pg_dump/pg_restore.c | 49 ++++++++++++++++++++++++++++++------
1 file changed, 41 insertions(+), 8 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 79f61395ae..65029ecb76 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -764,6 +764,10 @@ static int
ReadOneStatement(StringInfo inBuf, FILE *pfile)
{
int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
resetStringInfo(inBuf);
@@ -772,16 +776,44 @@ ReadOneStatement(StringInfo inBuf, FILE *pfile)
*/
while ((c = fgetc(pfile)) != EOF)
{
- appendStringInfoChar(inBuf, (char) c);
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if( c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ break;
+ }
if (c == '\n')
- {
- if(inBuf->len > 1 &&
- inBuf->data[inBuf->len - 2] == ';')
- break;
- else
- continue;
- }
+ appendStringInfoChar(inBuf, (char) c);
}
/* No input before EOF signal means time to quit. */
@@ -1113,6 +1145,7 @@ execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
/* Process file till EOF and execute sql statements */
while (ReadOneStatement(&sqlstatement, pfile) != EOF)
{
+ pg_log_info("executing %s", sqlstatement.data);
result = PQexec(conn, sqlstatement.data);
switch (PQresultStatus(result))
--
2.34.1
hi.
After some tests and thinking about your reply, I admit that using
expand_dbname_patterns in pg_restore will not work.
We need to do pattern matching against the map.dat file.
Please check the attached v12 series based on your
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
v12-0001 is a cosmetic change.
v12-0002 implements pg_restore --exclude-database=PATTERN.
The main gist of the implementation:
for each database name in the map.dat file,
check whether that database name matches PATTERN or not.
Pattern matching uses processSQLNamePattern; your substring approach will not work.
One of the test cases:
$BIN10/pg_restore --exclude-database=* -Cd template1 --verbose dir10 >
dir_format 2>&1
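The filtering step can be sketched locally without a server. The real patch sends each pattern through processSQLNamePattern and evaluates it with a database connection; the matcher below is a hypothetical stand-in that only supports `*` (any run of characters) and `?` (one character), enough to show why `--exclude-database=*` drops every entry read from map.dat.

```c
#include <stdbool.h>

/* Hypothetical local matcher: does db name 'name' match shell-style
 * pattern 'pat'?  '*' matches any run of characters, '?' matches one.
 * The actual patch instead builds a SQL regex via
 * processSQLNamePattern() and asks the server. */
static bool
pattern_match(const char *pat, const char *name)
{
	if (*pat == '\0')
		return *name == '\0';
	if (*pat == '*')
		return pattern_match(pat + 1, name) ||
			(*name != '\0' && pattern_match(pat, name + 1));
	if (*name != '\0' && (*pat == '?' || *pat == *name))
		return pattern_match(pat + 1, name + 1);
	return false;
}
```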
Hi,
As per discussion with Robert Haas and Dilip Kumar, we concluded that we
can't assume there will be a database connection every time while doing
pg_restore, but the attached patch assumes that we have one.
I already mentioned this problem in my previous updates. I think we
should not use a connection for --exclude-database; rather, we should use
direct functions to validate patterns, or we should restrict the option
to a plain NAME only.
On Sun, 26 Jan 2025 at 20:17, jian he <jian.universality@gmail.com> wrote:
hi.
attached patching trying to refactor ReadOneStatement
for properly handling the single and double quotes.
the commit message also has some tests on it. it is based on your
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
I think that if we read line by line instead of character by character,
then we don't need that much code and need not worry about double quotes.
In the next version, I will merge some patches and change it to
read line by line.
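A minimal sketch of that line-by-line alternative: accumulate whole lines and treat a line whose last non-whitespace character is ';' as ending the statement. The helper name `line_ends_statement` is hypothetical, and as jian's tests upthread show, this only stays correct if global.dat never puts a quoted ';' at the end of a line.

```c
#include <stdbool.h>
#include <string.h>

/* Does this input line terminate the current SQL statement?
 * Trailing newline and spaces are ignored; a trailing ';' ends it.
 * Quoted semicolons mid-line are fine, but a quoted ';' that happens
 * to close a line would be misread as a terminator. */
static bool
line_ends_statement(const char *line)
{
	size_t		len = strlen(line);

	while (len > 0 && (line[len - 1] == '\n' ||
					   line[len - 1] == '\r' ||
					   line[len - 1] == ' '))
		len--;
	return len > 0 && line[len - 1] == ';';
}
```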
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Hi Mahendra,
I have reviewed the code in the v11 patch and it looks good to me.
But in common_dumpall_restore.c there's parseDumpFormat, which is common
between pg_dumpall and pg_restore. As per the discussion in the [1] thread,
I don't think we should create a common API: as discussed there, there is a
chance that in the future we might decide that some format is obsolete
and desupport it in pg_dumpall, while keeping support in pg_restore for
compatibility reasons.
[1]: /messages/by-id/CAFC+b6pfK-BGcWW1kQmtxVrCh-JGjB2X02rLPQs_ZFaDGjZDsQ@mail.gmail.com
Regards,
Srinath Reddy Sadipiralla,
EDB: http://www.enterprisedb.com
make check-world fails. I think we don't need $port and $filename; instead we
can use something like 'xxx'. So I fixed it in the below patch.
Regards,
Srinath Reddy Sadipiralla,
EDB: http://www.enterprisedb.com
Attachments:
v1-fix-make-check-world.patch (application/octet-stream)
From 7666c6d2321b696c5b4a7dd4c4c80f7b12b22813 Mon Sep 17 00:00:00 2001
From: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Date: Tue, 28 Jan 2025 11:41:14 +0530
Subject: [PATCH 1/1] Refactor command_fails_like in pg_dump/t/001_basic.pl
to pass make check-world.
---
doc/src/sgml/ref/pg_dumpall.sgml | 78 ++-
doc/src/sgml/ref/pg_restore.sgml | 29 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 314 ++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 516 +++++++----------
src/bin/pg_dump/pg_restore.c | 701 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 11 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1373 insertions(+), 340 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 014f279258..8ca49a6597 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain
+ </para>
+ </listitem>
+ </varlistentry>
+
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify format of dump files. If we want to dump all the databases,
+ then pass this as non-plain so that dump of all databases can be taken
+ in separate subdirectory in archive format.
+ by default, this is plain format.
+
+ If non-plain mode is passed, then global.dat (global sql commands) and
+ map.dat(dboid and dbnames list of all the databases) files will be created.
+ Apart from these files, one subdirectory with databases name will be created.
+ Under this databases subdirectory, there will be files with dboid name for each
+ database and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719..ba2913b335 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -166,6 +166,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +334,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca7..a4e557d62c 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 0000000000..ace5077085
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,314 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * this is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/* ----------
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ * ----------
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
+
+/*
+ * parseDumpFormat
+ *
+ * This will validate dump formats.
+ */
+ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 0000000000..a27c3e9fb8
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname,
+ const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern ArchiveFormat parseDumpFormat(const char *format);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf..ddecac5cf0 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b2..65000e5a08 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844..7153d4a40b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd..d94d0de2a5 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f515..f70ea9233f 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: increasing this to keep 100 db restoring by single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index af857f00c7..2de2621c8f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c..5915b1b051 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,24 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +107,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +121,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +145,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +187,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +238,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +266,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +417,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name must be supplied; it
+ * names the main output directory to be created.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty string");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +478,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +513,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +522,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +544,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -607,7 +642,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +655,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +672,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1524,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1544,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1552,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; each database is then dumped into it by an
+ * ordinary per-database pg_dump invocation.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,6 +1593,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain dump formats, record the database oid and name in
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1547,9 +1623,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1558,19 +1642,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1675,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1685,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For non-plain formats, pass the per-database output path and the
+ * archive format to pg_dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1764,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1859,58 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * directory if it is empty.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into directory \"%s\", first "
+ "remove its contents, or run %s with a "
+ "different -f/--file argument.",
+ dirname, progname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 88ae39d938..79f61395ae 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,67 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static bool is_full_pattern(PGconn *conn, const char *str, const char *ptrn);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell, SimpleDatabaseOidListCell *prev);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +117,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
+ bool globals_only = false;
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +171,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +200,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +227,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* Restore only the global objects from global.dat. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +349,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +380,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -383,28 +441,80 @@ main(int argc, char **argv)
if (opts->formatName)
{
- switch (opts->formatName[0])
+ opts->format = parseDumpFormat(opts->formatName);
+
+ /* Plain format is not supported for pg_restore. */
+ if (opts->format == archNull)
{
- case 'c':
- case 'C':
- opts->format = archCustom;
- break;
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
+ opts->formatName);
+ }
+ }
- case 'd':
- case 'D':
- opts->format = archDirectory;
- break;
+ /*
+ * If no toc.dat file is present in the given path, check for
+ * global.dat. If global.dat is present, restore all the databases
+ * listed in map.dat (if it exists), skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
- case 't':
- case 'T':
- opts->format = archTar;
- break;
+ /* Connect to database to execute global sql commands from global.dat file. */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
- default:
- pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", or \"t\"",
- opts->formatName);
- }
- }
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat file.
+ */
+static bool
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ bool exit_code;
AH = OpenArchive(inputFileSpec, opts->format);
@@ -431,11 +541,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -471,6 +581,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -483,6 +594,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -621,3 +733,542 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc() until a semicolon, the
+ * SQL statement terminator used in global.dat, is seen at end of line.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from the list any database names that match an
+ * --exclude-database pattern.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ /* Check each dbname to see whether it should be skipped. */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ /* Now match this dbname with exclude-database list. */
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ if ((conn && is_full_pattern(conn, dboid_cell->db_name, celldb->val)) ||
+ (!conn && pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * Mark this dbname so it is removed from the restore
+ * list.
+ *
+ * Note: we cannot remove the pattern from the exclude list,
+ * since multiple database names may match the same pattern.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ }
+
+ /* Either remove the dbname from the list or count it as one to restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file and read it line by line, building a list of
+ * database names and their corresponding oids.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("map.dat is not present in the pg_dumpall dump, so there are no databases to restore");
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file while restoring", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether
+ * this database needs to be skipped for restore, but for now we
+ * make a list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases specified with the exclude-database option are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, false,
+ progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection, so each exclude-database pattern will be matched as a plain name");
+ }
+ }
+
+ /*
+ * TODO: the design for skipping databases needs more thought.
+ *
+ * Skip any explicitly excluded database. If there is no database
+ * connection, then just compare each pattern as a plain name.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the connection; we are done with globals and pattern matching. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d databases out of %d", num_db_restore, num_total_db);
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified, or all databases must be created before running pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring dump of pg_dumpall without -C option; there might be multiple databases in the directory");
+
+ /* TODO: MAX_ON_EXIT_NICELY is 100 now... max AH handles registered on exit. */
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than 100 databases in a single pg_restore run, total here: %d", num_db_restore);
+ }
+ }
+
+ /*
+ * XXX TODO: at this point we have the list of databases to restore,
+ * with the exclude-database names already filtered out. Now we can
+ * launch parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that objects
+ * can be restored into each already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * This opens the global.dat file and executes all global sql commands,
+ * one statement at a time.
+ * A semicolon is treated as the statement terminator. If outfile is given,
+ * this copies all sql commands into outfile rather than executing them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, PG_BINARY_W);
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now append global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * delete all cells from the database name and OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * delete one cell from the database name and OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list, SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid if the last cell was removed. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * is_full_pattern
+ *
+ * This uses the substring function to try to reconstruct the given
+ * string from the pattern. The output of the substring function is
+ * compared with the original string to validate the pattern.
+ *
+ * Returns true if str can be fully matched by the given pattern.
+ *
+ */
+static bool
+is_full_pattern(PGconn *conn, const char *str, const char *ptrn)
+{
+ PQExpBuffer query;
+ PGresult *result;
+
+ query = createPQExpBuffer();
+
+ printfPQExpBuffer(query,
+ "SELECT substring ( "
+ " '%s' , "
+ " '%s' ) ", str, ptrn);
+
+ result = executeQuery(conn, query->data);
+
+ if (PQresultStatus(result) == PGRES_TUPLES_OK)
+ {
+ if (PQntuples(result) == 1)
+ {
+ const char *outstr = NULL;
+
+ /*
+ * If the output of the substring function matches str, then we
+ * can construct str from the pattern.
+ */
+ if (!PQgetisnull(result, 0, 0))
+ outstr = PQgetvalue(result, 0, 0);
+
+ if (outstr && pg_strcasecmp(outstr, str) == 0)
+ {
+ PQclear(result);
+ destroyPQExpBuffer(query);
+ return true;
+ }
+ }
+ }
+ else
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), query->data);
+
+ PQclear(result);
+ destroyPQExpBuffer(query);
+
+ return false;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae..20ddf3646a
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,13 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '-p', 'xxx', '-f', 'xxx',
+ "--exclude-database=grabadge",
+ '--globals-only' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +233,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2644a2e65..96f460d2e3 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2673,6 +2673,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.43.0
On Tue, 28 Jan 2025 at 10:19, Srinath Reddy <srinath2133@gmail.com> wrote:
Hi mahendra,
I have reviewed the code in the v11 patch and it looks good to me.
But in common_dumpall_restore.c there's parseDumpFormat, which is common between pg_dumpall and pg_restore. As per the discussion in the [1] thread, I don't think we should create a common API: as discussed there, there is a chance that in the future we might decide that some format is obsolete and desupport it in pg_dumpall, while still supporting it in pg_restore for compatibility reasons.
Okay. Thanks for the review. I will make changes as per the discussion in
the other thread.
On Tue, 28 Jan 2025 at 11:52, Srinath Reddy <srinath2133@gmail.com> wrote:
make check-world fails. I think we don't need $port and $filename; instead we can use something like 'xxx', so I fixed it in the patch below.
In an offline discussion, Andrew already reported this test case. I will
fix this in the next version.
Regards,
Srinath Reddy Sadipiralla,
EDB: http://www.enterprisedb.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Fri, 24 Jan 2025 at 20:50, jian he <jian.universality@gmail.com> wrote:
On Thu, Jan 23, 2025 at 6:35 PM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
hi.
After some tests and thinking about your reply, I admit that using
expand_dbname_patterns
in pg_restore will not work.
We need to do pattern matching against the map.dat file.
Please check the attached v12 series, based on your
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
v12-0001: cosmetic changes.
v12-0002: implements pg_restore --exclude-database=PATTERN.
main gist of the implementation:
for each database name in the map.dat file,
check whether this database name matches PATTERN or not.
Pattern matching uses processSQLNamePattern. Your substring approach will not work.
some of the test cases:
$BIN10/pg_restore --exclude-database=* -Cd template1 --verbose dir10 > dir_format 2>&1
$BIN10/pg_restore --exclude-database=*x* -Cd template1 --verbose dir10 > dir_format 2>&1
$BIN10/pg_restore --exclude-database=?* -Cd template1 --verbose dir10 > dir_format 2>&1
I merged v12_0001 into the latest patch. There was one bug in v12_0001*
which was fixed in v12_0002*.
-extern PGconn *connectDatabase(const char *dbname,
-                               const char *connection_string, const char *pghost,
-                               const char *pgport, const char *pguser,
-                               trivalue prompt_password, bool fail_on_error,
-                               const char *progname, const char **connstr, int *server_version);
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
As per v12_0002*, I made some changes to the current patch to avoid using
the substring function.
On Tue, 28 Jan 2025 at 11:57, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Tue, 28 Jan 2025 at 10:19, Srinath Reddy <srinath2133@gmail.com> wrote:
Hi mahendra,
I have reviewed the code in the v11 patch and it looks good to me.
But in common_dumpall_restore.c there's parseDumpFormat, which is
common between pg_dumpall and pg_restore. As per the discussion in the [1]
thread, I don't think we should create a common API: as discussed in the
thread, there is a chance that in the future we might decide that some format
is obsolete and desupport it in pg_dumpall, while still supporting it in pg_restore for
compatibility reasons.
Fixed. In the latest patch, I removed the common parseDumpFormat function.
In older versions, I was using the same function for pg_dumpall and
pg_restore, but now some common code from this patch is already committed
and, as per the discussion, we will keep separate handling for parsing, so the
parseDumpFormat function is added only in the pg_dumpall.c file.
On Sun, 26 Jan 2025 at 20:17, jian he <jian.universality@gmail.com> wrote:
hi.
the attached patch tries to refactor ReadOneStatement
to properly handle single and double quotes.
The commit message also has some tests for it. It is based on your
v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch.
Okay. I am doing some more testing and code review for this type of test
case. I will merge this delta into the next version.
Okay. Thanks for the review. I will make changes as per the discussion in
the other thread.
On Tue, 28 Jan 2025 at 11:52, Srinath Reddy <srinath2133@gmail.com> wrote:
make check-world fails. I think we don't need $port and $filename;
instead we can use something like 'xxx', so I fixed it in the patch below.
In an offline discussion, Andrew already reported this test case. I will
fix this in the next version.
Fixed.
Thanks Jian and Srinath for the testing and review.
Here, I am attaching an updated patch for review and testing.
I merged some of the delta patches that were shared by Jian and also made
some fixes.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v13_pg_dumpall-with-directory-tar-custom-format-28-jan.patch (application/octet-stream)
From e422aa60c99c1fc44578741594c52c30359b112f Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 28 Jan 2025 03:33:51 -0800
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (default: plain text)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat    ::: dboid dbname -- entries for all databases in simple text form
databases  :::
    subdir dboid1 -> toc.dat and data files in archive format
    subdir dboid2 -> toc.dat and data files in archive format
    etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat and map.dat to restore all databases. If both files exist in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list.
TODO1: We need to think about --exclude-database=PATTERN for pg_restore.
As of now: SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
TODO2: We need to make changes for exit_nicely, as we add one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for the newly added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 78 +++-
doc/src/sgml/ref/pg_restore.sgml | 29 ++
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 281 ++++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 24 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 550 +++++++++++------------
src/bin/pg_dump/pg_restore.c | 728 ++++++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 11 +-
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1416 insertions(+), 323 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2..4298083 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+<varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump all the databases,
+ pass a non-plain format so that the dump of each database is taken
+ in a separate subdirectory in archive format.
+ By default, this is the plain format.
+
+ If a non-plain mode is passed, then global.dat (global sql commands) and
+ map.dat (the dboid and dbname list of all the databases) files will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be one entry named after the
+ dboid of each database, and if <option>--format</option> is directory, then toc.dat
+ and other dump files will be under the dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1..ba2913b 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -167,6 +167,25 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
<listitem>
@@ -316,6 +335,16 @@ PostgreSQL documentation
</varlistentry>
<varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15..a4e557d 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 0000000..446f325
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,281 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, and if connstr isn't NULL, the output parameter *connstr is
+ * set to a connection string containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in the form of a connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/* ----------
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ * ----------
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 0000000..7fe1c00
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6c..ddecac5 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb..65000e5 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc..7153d4a 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -333,7 +333,7 @@ ProcessArchiveRestoreOptions(Archive *AHX)
/* Public */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +450,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1263,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1279,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1658,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1679,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b4..d94d0de 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f..f70ea92 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -21,7 +21,8 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
+/* TODO: increased to allow restoring up to 100 databases with a single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
static struct
{
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 02e1fdf..61067e1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f797..3e022ec 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,25 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +146,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +188,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +239,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +267,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +418,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, the user must also supply -f,
+ * whose argument becomes the top-level output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -461,6 +480,33 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
/*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create a new directory, or accept an existing empty one. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
+ /*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
* "template1".
@@ -468,7 +514,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +523,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -497,19 +546,6 @@ main(int argc, char *argv[])
&database_exclude_names);
/*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
- /*
* Set the client encoding if requested.
*/
if (dumpencoding)
@@ -607,7 +643,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +656,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -637,6 +673,8 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1525,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1545,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1553,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a directory/tar/custom format was specified, create a "databases"
+ * subdirectory under the main output directory; each database is then
+ * dumped in archive mode into its own entry there, named by OID, just
+ * as a single-database pg_dump would produce.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file recording each database's OID and name. */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open map file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,6 +1594,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, record the database's OID and name in
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one "dboid dbname" line per database to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1547,9 +1624,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* We're dumping all databases, so add the --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1558,19 +1643,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1676,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1686,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the target file name and the
+ * archive format down to the pg_dump command line.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1650,256 +1766,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
}
/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
-/*
* As above for a SQL command (which returns nothing).
*/
static void
@@ -1994,3 +1860,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * directory if it is empty.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format string.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272..fc248a4 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,69 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +119,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +173,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the globals (global.dat) from the dump directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +351,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +382,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +464,76 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, then check for
+ * global.dat. If global.dat is present, restore all the databases
+ * listed in map.dat (if it exists), skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, then process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * Connect to database to execute global sql commands from
+ * global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ pg_log_info("skipping restore of databases as -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns, opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from the given dump using its toc.dat file.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ int exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,11 +559,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -469,6 +599,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +612,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -619,3 +751,585 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc() until a semicolon at the
+ * end of a line (the SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(inBuf, (char) c);
+
+ if (c == '\n')
+ {
+ if (inBuf->len > 1 &&
+ inBuf->data[inBuf->len - 2] == ';')
+ break;
+ else
+ continue;
+ }
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any names that match a pattern in
+ * the db_exclude_patterns list.
+ *
+ * returns number of dbnames those will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * The constructed pattern-matching query is:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX is the string-literal database name taken from the dboid_list
+ * variable, which was originally read from the map.dat file in the
+ * backup directory; that is why quote_literal_cstr() is needed.
+ *
+ * If we don't have db connection, then consider patterns as NAME
+ * only.
+ */
+ if (!conn && (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ {
+ /*
+ * As we need to skip this dbname so set flag to remove it from
+ * list.
+ */
+ skip_db_restore = true;
+ break;
+ }
+ else
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("\"%s\" database is matching with exclude \"%s\" pattern", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+
+ if (skip_db_restore)
+ break;
+ }
+ }
+
+ /* Remove the database from the list if excluded; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db couter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat file and read line by line and then prepare a list of database
+ * names and correspoding db_oid.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("map.dat file is not present in dump of pg_dumpall, so nothing to restore.");
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove \n from dbanme. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found dbname as : \"%s\" and db_oid:%u in map.dat file while restoring", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
+
+ /*
+ * XXX : before adding dbname into list, we can verify that this db
+ * needs to skipped for restore or not but as of now, we are making
+ * a list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection available, so --exclude-database patterns will be treated as plain names");
+ }
+ }
+
+ /*
+ * processing pg_retsore --exclude-database=PATTERN.
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * To restore multiple databases, -C (create database) option should be specified
+ * or all databases should be created before pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
+
+ /* TODO: MAX_ON_EXIT_NICELY is 100 now... max AH handle register on exit .*/
+ if (num_db_restore > 100)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cound not restore more than 100 databases by single pg_restore, here total db:%d", num_db_restore);
+ }
+
+ /*
+ * XXX: TODO: at this point we have the list of databases to restore,
+ * with --exclude-database names already removed. Now we can launch
+ * parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ } /* end while */
+
+ /* Log number of processed databases.*/
+ pg_log_info("number of restored databases are %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time. A semicolon is treated as the statement terminator.
+ * If outfile is passed, copy all the SQL commands into outfile rather than
+ * executing them.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+
+ ofile = fopen(out_file_path, PG_BINARY_W);
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now append global.dat into outfile. */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ fputc(c, ofile);
+ }
+
+ fclose(pfile);
+ fclose(ofile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell->db_name);
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete the given cell from the database/oid list; prev is its
+ * predecessor, or NULL if the cell is the list head.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr -
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f..ebfd07b
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,11 +219,20 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
- [ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
+ [ 'pg_dumpall', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
qr/\Qpg_dumpall: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2644a2..96f460d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2673,6 +2673,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
1.8.3.1
hi.
we need to escape the semicolon within the single quotes or double quotes.
I think my patch in [1] is correct.
we can have "ERROR: role "z" already exists",
but an error message like
pg_restore: error: could not execute query: "ERROR: unterminated quoted string at or near "';
should not be accepted in execute_global_sql_commands, ReadOneStatement, PQexec.
Attached are all the corner test cases I came up with against ReadOneStatement.
Your v13 will generate errors like "ERROR: unterminated quoted string at or near ...", which is not good, I think.
[1]: /messages/by-id/CACJufxEQUcjBocKJQ0Amf3AfiS9wFB7zYSHrj1qqD_oWeaJoGQ@mail.gmail.com
hi.
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
Can we spare some words to explain the purpose of append_data?
in get_dbname_oid_list_from_mfile
pg_log_info("map.dat file is not present in dump of pg_dumpall, so nothing to restore.");
maybe we can change it to
pg_log_info("databases restoring is skipped as map.dat file is not present in \"%s\"", dumpdirpath);
we can also add Assert(dumpdirpath != NULL)
pg_log_info("found dbname as : \"%s\" and db_oid:%u in map.dat file while restoring", dbname, db_oid);
also needs to change, maybe to
pg_log_info("found database \"%s\" (OID: %u) in map.dat file while restoring.", dbname, db_oid);
I also did some minor refactoring, please check attached.
doc/src/sgml/ref/pg_restore.sgml
<refnamediv>
<refname>pg_restore</refname>
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
</refpurpose>
</refnamediv>
need to change, since now we can restore multiple databases.
doc/src/sgml/ref/pg_dumpall.sgml
<refnamediv>
<refname>pg_dumpall</refname>
<refpurpose>extract a <productname>PostgreSQL</productname> database
cluster into a script file</refpurpose>
</refnamediv>
also need change.
Attachments:
v13-0001-minor-coesmetic-change-based-on-v13.no-cfbot
From cae95a1db4caf35e68697383ab0416dc86173c38 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Fri, 31 Jan 2025 11:51:30 +0800
Subject: [PATCH v13 1/1] minor coesmetic change based on v13
---
doc/src/sgml/ref/pg_dumpall.sgml | 2 +-
src/bin/pg_dump/common_dumpall_restore.h | 2 ++
src/bin/pg_dump/pg_backup_utils.c | 4 +--
src/bin/pg_dump/pg_dumpall.c | 2 +-
src/bin/pg_dump/pg_restore.c | 46 ++++++++++++++----------
5 files changed, 32 insertions(+), 24 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 429808373b..508d5ac57a 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -126,7 +126,7 @@ PostgreSQL documentation
</listitem>
</varlistentry>
-<varlistentry>
+ <varlistentry>
<term><option>-F <replaceable class="parameter">format</replaceable></option></term>
<term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
index 7fe1c00ab7..a0dcdbe080 100644
--- a/src/bin/pg_dump/common_dumpall_restore.h
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -16,6 +16,8 @@
#include "pg_backup.h"
+/* TODO: increasing this to keep 100 db restoring by single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
const char *pgport, const char *pguser,
trivalue prompt_password, bool fail_on_error,
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index f70ea9233f..47589cca90 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -21,9 +22,6 @@
/* Globals exported by this file */
const char *progname = NULL;
-/* TODO: increasing this to keep 100 db restoring by single pg_restore command. */
-#define MAX_ON_EXIT_NICELY 100
-
static struct
{
on_exit_nicely_callback function;
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 3e022ecdeb..09b61dab00 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -674,7 +674,7 @@ help(void)
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
- " plain text (default))\n"));
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index fc248a441e..10d1553e48 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -77,15 +77,16 @@ simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *d
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
static bool IsFileExistsInDirectory(const char *dir, const char *filename);
-static int restoreOneDatabase(const char *inputFileSpec,
- RestoreOptions *opts, int numWorkers, bool append_data);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
const char *outfile);
static int filter_dbnames_for_restore(PGconn *conn,
- SimpleDatabaseOidList *dbname_oid_list, SimpleStringList db_exclude_patterns);
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list);
static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
@@ -510,7 +511,9 @@ main(int argc, char **argv)
{
/* Now restore all the databases from map.dat file. */
exit_code = restoreAllDatabases(conn, inputFileSpec,
- db_exclude_patterns, opts, numWorkers);
+ db_exclude_patterns,
+ opts,
+ numWorkers);
}
/* Free db pattern list. */
@@ -821,8 +824,9 @@ ReadOneStatement(StringInfo inBuf, FILE *pfile)
* returns number of dbnames those will be restored.
*/
static int
-filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
- SimpleStringList db_exclude_patterns)
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
{
SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
SimpleDatabaseOidListCell *dboidprecell = NULL;
@@ -833,6 +837,7 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
/* Return 0 if there is no db to restore. */
if (dboid_cell == NULL)
return 0;
+ Assert(conn);
query = createPQExpBuffer();
@@ -860,7 +865,7 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
* If we don't have db connection, then consider patterns as NAME
* only.
*/
- if (!conn && (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0))
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
{
/*
* As we need to skip this dbname so set flag to remove it from
@@ -875,13 +880,13 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
appendPQExpBufferStr(query, "SELECT 1 ");
processSQLNamePattern(conn, query, celldb->val, false,
- false, NULL, quote_literal_cstr(dboid_cell->db_name),
- NULL, NULL, NULL, &dotcnt);
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
if (dotcnt > 0)
{
pg_log_error("improper qualified name (too many dotted names): %s",
- celldb->val);
+ celldb->val);
PQfinish(conn);
exit_nicely(1);
}
@@ -891,7 +896,7 @@ filter_dbnames_for_restore(PGconn *conn, SimpleDatabaseOidList *dbname_oid_list,
if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
{
skip_db_restore = true;
- pg_log_info("\"%s\" database is matching with exclude \"%s\" pattern", dboid_cell->db_name, celldb->val);
+ pg_log_info("database \"%s\" is matching with exclude pattern: \"%s\"", dboid_cell->db_name, celldb->val);
}
PQclear(res);
@@ -1000,13 +1005,13 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
*/
static int
restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts,
- int numWorkers)
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
{
SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
SimpleDatabaseOidListCell *dboid_cell;
int exit_code = 0;
- int num_db_restore;
+ int num_db_restore = 0;
int num_total_db;
num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
@@ -1042,8 +1047,9 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
/*
* processing pg_retsore --exclude-database=PATTERN.
*/
- num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
- db_exclude_patterns);
+ if (conn)
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
/* Close the db connection as we are done globals and patterns. */
if (conn)
@@ -1063,10 +1069,12 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
/* TODO: MAX_ON_EXIT_NICELY is 100 now... max AH handle register on exit .*/
- if (num_db_restore > 100)
+ if (num_db_restore > MAX_ON_EXIT_NICELY)
{
simple_db_oid_full_list_delete(&dbname_oid_list);
- pg_fatal("cound not restore more than 100 databases by single pg_restore, here total db:%d", num_db_restore);
+ pg_fatal("could not restore more than %d databases by single pg_restore, here total db:%d",
+ MAX_ON_EXIT_NICELY,
+ num_db_restore);
}
/*
@@ -1102,7 +1110,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
exit_code = dbexit_code;
dboid_cell = dboid_cell->next;
- } /* end while */
+ }
/* Log number of processed databases.*/
pg_log_info("number of restored databases are %d", num_db_restore);
--
2.34.1
hi.
more small issues.
+ count_db++; /* Increment db couter. */
+ dboidprecell = dboid_cell;
+ }
+
typo, "couter" should be "counter".
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat file and read line by line and then prepare a list of database
+ * names and correspoding db_oid.
+ *
typo, "correspoding" should be "corresponding".
The execute_global_sql_commands comment doesn't cover the ``if (outfile)`` branch.
We could add a comment saying
"if opts->filename is specified, then copy the content of global.dat to opts->filename",
or split it into two functions.
+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove \n from dbanme. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found dbname as : \"%s\" and db_oid:%u in map.dat file
while restoring", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
+
+ /*
+ * XXX : before adding dbname into list, we can verify that this db
+ * needs to skipped for restore or not but as of now, we are making
+ * a list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
db_oid should first be set to 0, and dbname's first character should be set
to '\0' (dbname[0] = '\0') before the sscanf calls, so that if sscanf fails,
db_oid and dbname are not left undetermined.
Hi,
I think we have to change the pg_dumpall "--help" message, similar to
pg_dump's, specifying that pg_dumpall now dumps a cluster into other
non-text formats as well.
We need a similar "--help" message change in pg_restore, specifying that
pg_restore now supports restoring a whole cluster from an archive created
by pg_dumpall.
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 3e022ecdeb..728abe841c 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -667,7 +667,7 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or to other formats.\n\n"), progname);
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index fc248a441e..c4e58c1f3b 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -582,6 +582,8 @@ static void
usage(const char *progname)
{
printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("[or]\n"));
+ printf(_("%s restores an entire PostgreSQL cluster from an archive created by pg_dumpall.\n\n"), progname);

Regards,
Srinath Reddy Sadipiralla
EDB: https://www.enterprisedb.com
Thanks Jian for review, testing and delta patches.
On Wed, 29 Jan 2025 at 15:09, jian he <jian.universality@gmail.com> wrote:
hi.
we need to handle semicolons within single quotes or double quotes correctly.
I think my patch in [1] is correct. We can have "ERROR: role "z" already exists",
but an error message like
pg_restore: error: could not execute query: "ERROR: unterminated
quoted string at or near "';
should not be accepted in execute_global_sql_commands, ReadOneStatement,
PQexec
Attached are all the corner test cases I came up with against
ReadOneStatement.
Your v13 will generate errors like "ERROR: unterminated quoted string
at or near ...", which is not good, I think.
[1] /messages/by-id/CACJufxEQUcjBocKJQ0Amf3AfiS9wFB7zYSHrj1qqD_oWeaJoGQ@mail.gmail.com
Yes, you are right. We can't read line by line. We should read char by char,
and we need some extra handling for double-quoted names.
I have merged your delta patch into this, and now I am doing some more
testing for corner cases with these kinds of names.
Ex: comments embedded in names, multiple semicolons, or other special
characters in a name.
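To make the char-by-char idea concrete, here is a hedged, self-contained sketch (a hypothetical helper, not the patch's ReadOneStatement; it ignores doubled quotes, dollar quoting, and comments) of the core rule: a semicolon only terminates a statement when the scan is outside both single and double quotes.

```c
#include <assert.h>

/*
 * Hypothetical sketch: does the character at buf[pos] end a statement?
 * Track quoting state from the start of the buffer; a ';' inside a
 * 'single-quoted' literal or a "double-quoted" identifier is just data.
 * (Doubled quotes, dollar quoting, and comments are not handled here.)
 */
static int
semicolon_ends_statement(const char *buf, int pos)
{
	int			in_squote = 0;
	int			in_dquote = 0;
	int			i;

	for (i = 0; i < pos; i++)
	{
		if (buf[i] == '\'' && !in_dquote)
			in_squote = !in_squote;
		else if (buf[i] == '"' && !in_squote)
			in_dquote = !in_dquote;
	}
	return buf[pos] == ';' && !in_squote && !in_dquote;
}
```

For example, in CREATE ROLE "a;b"; the inner semicolon is part of the role name; only the final one ends the statement.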
On Fri, 31 Jan 2025 at 09:23, jian he <jian.universality@gmail.com> wrote:
hi.
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
Can we spare some words to explain the purpose of append_data?
Fixed. I added some comments on the top of the RestoreArchive function.
in get_dbname_oid_list_from_mfile
pg_log_info("map.dat file is not present in dump of
pg_dumpall, so nothing to restore.");
maybe we can change it to
pg_log_info("databases restoring is skipped as map.dat file is
not present in \"%s\"", dumpdirpath);
Fixed.
we can also add Assert(dumpdirpath != NULL)
No, we don't need it, as we are already checking inputfileSpec != NULL.
pg_log_info("found dbname as : \"%s\" and db_oid:%u in map.dat file
while restoring", dbname, db_oid);
also need to change. maybe
pg_log_info("found database \"%s\" (OID: %u) in map.dat file while
restoring.", dbname, db_oid);
Fixed.
I also did some minor refactoring, please check attached.
Thanks. I merged it.
doc/src/sgml/ref/pg_restore.sgml
<refnamediv>
<refname>pg_restore</refname><refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
</refpurpose>
</refnamediv>
need to change, since now we can restore multiple databases.
Agreed. I added some comments.
doc/src/sgml/ref/pg_dumpall.sgml
<refnamediv>
<refname>pg_dumpall</refname>
<refpurpose>extract a <productname>PostgreSQL</productname> database
cluster into a script file</refpurpose>
</refnamediv>
also need change.
On Sat, 1 Feb 2025 at 21:36, Srinath Reddy <srinath2133@gmail.com> wrote:
Hi,
I think we have to change the pg_dumpall "--help" message, similar to
pg_dump's, specifying that pg_dumpall now dumps a cluster into other
non-text formats as well.
Need a similar "--help" message change in pg_restore, specifying that
pg_restore now supports restoring a whole cluster from an archive created
by pg_dumpall.
As Jian suggested, we need to change docs so I did the same changes into
doc and --help also.
On Fri, 31 Jan 2025 at 14:22, jian he <jian.universality@gmail.com> wrote:
hi.
more small issues.
+ count_db++; /* Increment db couter. */
+ dboidprecell = dboid_cell;
+ }
+
typo, "couter" should be "counter".
Fixed.
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat file and read line by line and then prepare a list of database
+ * names and correspoding db_oid.
+ *
typo, "correspoding" should be "corresponding".
Fixed.
execute_global_sql_commands comments didn't mention ``IF (outfile) ``
branch related code.
We can add some comments saying that
""IF opts->filename is not specified, then copy the content of
global.dat to opts->filename""".
We already have some comments on the top of the execute_global_sql_commands
function.
or split it into two functions.
Done. I added a new function for outfile.
+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1];
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove \n from dbanme. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found dbname as : \"%s\" and db_oid:%u in map.dat file while restoring", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
+
+ /*
+ * XXX : before adding dbname into list, we can verify that this db
+ * needs to skipped for restore or not but as of now, we are making
+ * a list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
db_oid should first be set to 0, and dbname's first character should be set
to '\0' (dbname[0] = '\0') before the sscanf calls, so that if sscanf fails,
db_oid and dbname are not left undetermined.
Okay. Fixed.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v14_pg_dumpall-with-non-text_format-3rd_feb.patch (application/octet-stream)
From af34451db26a11543196f52464849774e5a3fe59 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 3 Feb 2025 01:27:29 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore a single db from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid, look up the dbname in the map.dat file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat to restore all databases. If a global.dat file exists in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists)
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if db connection,
if no db connection, then PATTERN=NAME matching only
TODO2: We need to make changes for exit_nicely, as we add one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for new added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 31 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 286 +++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 22 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 552 ++++++++--------
src/bin/pg_dump/pg_restore.c | 773 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1476 insertions(+), 326 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2c0e3..6e1975f5ff0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file or an archive in the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. To dump all databases in archive
+ format, each in its own subdirectory, specify a non-plain format;
+ by default, the format is plain.
+
+ If a non-plain format is specified, then global.dat (global SQL commands) and
+ map.dat (a list of dboid and dbname for all the databases) files will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be one entry named for each
+ database's dboid, and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under the dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..0609b7eb534 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -20,6 +20,8 @@ PostgreSQL documentation
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
+ or restore multiple <productname>PostgreSQL</productname> databases from an
+ archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -166,6 +168,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +336,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..b162cf69412
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,286 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a0dcdbe0807
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+/* TODO: increased to allow restoring up to 100 databases with a single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..e91f4b836f6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -331,9 +331,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data into file as we are restoring dump
+ * of multiple databases which was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +455,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1268,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1284,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1663,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1684,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..47589cca90f 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -21,8 +22,6 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
-
static struct
{
on_exit_nicely_callback function;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 02e1fdf8f78..61067e1542e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..80341db324d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,25 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +146,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +188,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +239,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +267,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +418,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, the user must also supply a
+ * file name, which becomes the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty string");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +479,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if one was specified, otherwise use stdout. For
+ * non-plain formats, create the output directory and a global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +514,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +523,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +545,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -607,7 +643,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +656,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -631,12 +667,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster in the specified dump format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1525,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1545,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1553,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For non-plain (directory/tar/custom) formats, create a "databases"
+ * subdirectory under the main directory; each database is then dumped
+ * into its own archive there, just as a single-database pg_dump would
+ * produce.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file to record the OID and name of each database. */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,6 +1594,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain dump formats, append the database OID and name to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1547,9 +1624,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1558,19 +1643,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1676,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1686,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the requested
+ * archive format to the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1765,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1860,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a directory with the given name; if an empty directory of
+ * that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into directory \"%s\", either remove or empty it, "
+ "or run %s with a -f argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the dump format specified with -F/--format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
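For reference, the map.dat file written above is line-oriented: one "<oid> <dbname>" entry per line. A minimal reader sketch (illustrative only; parse_map_file is not part of the patch). Since the OID comes first and never contains a space, splitting on the first space tolerates database names with spaces, though a name containing a newline would still break this simple format.

```python
import os
import tempfile

def parse_map_file(path):
    """Return (oid, dbname) pairs from a map.dat file as written by
    dumpDatabases(): one "<oid> <dbname>" entry per line."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            # OIDs never contain spaces, so split on the first one only;
            # this keeps database names containing spaces intact.
            oid, _, dbname = line.partition(" ")
            entries.append((int(oid), dbname))
    return entries

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    path = os.path.join(d, "map.dat")
    with open(path, "w") as f:
        f.write("5 postgres\n16384 my db\n")
    print(parse_map_file(path))
```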
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272d7db..42c4fe3ce2e 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,71 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +121,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +175,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +204,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +231,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the global.dat file from the directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +353,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +466,78 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If no toc.dat file is present in the given path, check for
+ * global.dat. If global.dat is present, restore all the databases
+ * listed in map.dat (if it exists), skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * Connect to database to execute global sql commands from
+ * global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ if (conn)
+ PQfinish(conn);
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns,
+ opts,
+ numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database archive (one containing a toc.dat file).
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ int exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,11 +563,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -469,6 +603,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +616,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -619,3 +755,626 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the named file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc() until a semicolon, the
+ * SQL statement terminator used in global.dat, is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from fgetc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any database whose name matches one of
+ * the --exclude-database patterns.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection, so treating --exclude-database patterns as plain names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX is the database name as a string literal, taken from the
+ * dboid_list variable, which is initially extracted from the
+ * map.dat file in the backup directory; that is why we need
+ * quote_literal_cstr.
+ *
+ * If we don't have a db connection, then treat each pattern as a
+ * plain name.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove excluded databases from the list; count the rest. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
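For illustration, the matching the query above delegates to the server (an anchored `^(PATTERN)$` regex built by processSQLNamePattern) can be approximated client-side. This sketch is hypothetical: it translates only `*` and `?` and does not escape regex metacharacters in the literal part, unlike the real pattern machinery:

```c
#include <regex.h>
#include <stdio.h>

/* Hypothetical client-side sketch of what the server-side query checks:
 * translate a psql-style pattern ('*' -> '.*', '?' -> '.') into a POSIX
 * regex anchored as ^(...)$, then test a database name against it.  The
 * real code delegates this to processSQLNamePattern() and lets the
 * server do the matching. */
static int
dbname_matches_pattern(const char *dbname, const char *pattern)
{
    char    re[256];
    size_t  n = 0;
    regex_t rx;
    int     matched;

    n += snprintf(re + n, sizeof(re) - n, "^(");
    for (; *pattern && n < sizeof(re) - 8; pattern++)
    {
        if (*pattern == '*')
            n += snprintf(re + n, sizeof(re) - n, ".*");
        else if (*pattern == '?')
            n += snprintf(re + n, sizeof(re) - n, ".");
        else
            n += snprintf(re + n, sizeof(re) - n, "%c", *pattern);
    }
    snprintf(re + n, sizeof(re) - n, ")$");

    if (regcomp(&rx, re, REG_EXTENDED) != 0)
        return 0;               /* treat a bad pattern as "no match" */
    matched = (regexec(&rx, dbname, 0, NULL, 0) == 0);
    regfree(&rx);
    return matched;
}
```

So `test*` matches `testdb1` but not `postgres`, mirroring the anchored server-side regex.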
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1];
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing \n from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file while restoring.", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether the
+ * database should be skipped for restore, but for now we list all
+ * databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
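A map.dat line has the form `<oid> <dbname>\n`. As a hedged sketch of the parsing above (hypothetical helper name; the real code uses two sscanf calls plus strcpy on the fgets buffer):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of parsing one map.dat line of the form
 * "<oid> <dbname>\n".  The dbname is everything after the first space,
 * which keeps database names containing spaces intact -- the same
 * reason the real code copies from line + strlen(db_oid_str) + 1
 * instead of using sscanf("%s") for the name.  Returns 1 on success. */
static int
parse_map_line(const char *line, unsigned int *oid, char *dbname, size_t dbsz)
{
    char   oid_str[32];
    size_t len;

    if (sscanf(line, "%u", oid) != 1)
        return 0;               /* line does not start with a number */
    if (sscanf(line, "%31s", oid_str) != 1)
        return 0;

    /* Everything after the OID token and one separator is the name. */
    snprintf(dbname, dbsz, "%s", line + strlen(oid_str) + 1);

    /* Strip the trailing newline, if present. */
    len = strlen(dbname);
    if (len > 0 && dbname[len - 1] == '\n')
        dbname[len - 1] = '\0';

    return *oid != 0 && dbname[0] != '\0';
}
```

For example, the line `19554 my db` parses to OID 19554 and the name `my db`, space included.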
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat mapping.
+ *
+ * Databases matched by the exclude-database option are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore = 0;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection, so treating exclude-database patterns as plain names");
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN (patterns are treated
+ * as plain names if there is no connection).
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("needs to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * To restore multiple databases, the -C (create database) option should be
+ * specified, or all databases should be created before running pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the maximum number of AH handles that can be registered on exit. */
+ if (num_db_restore > MAX_ON_EXIT_NICELY)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than %d databases with a single pg_restore; %d were requested",
+ MAX_ON_EXIT_NICELY,
+ num_db_restore);
+ }
+
+ /*
+ * XXX: TODO: by now we have built the list of databases to restore,
+ * having skipped the exclude-database names. We can now launch
+ * parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname so that objects are restored into the
+ * already-created database (used with the -d/--dbname option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ }
+
+ /* Log the number of restored databases. */
+ pg_log_info("number of restored databases are %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * execute_global_sql_commands
+ *
+ * Open the global.dat file and execute all global SQL commands, one
+ * statement at a time. A semicolon is treated as the statement
+ * terminator. If outfile is given, the SQL commands are copied into
+ * outfile rather than executed.
+ */
+static void
+execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_global_file_to_out_file(outfile, pfile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * copy_global_file_to_out_file
+ *
+ * Copy the contents of global.dat into the output file.
+ */
+static void
+copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *ofile;
+ int c;
+
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ ofile = fopen(out_file_path, PG_BINARY_W);
+
+ if (ofile == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", out_file_path);
+ }
+
+ /* Now append global.dat into out file. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, ofile);
+
+ fclose(pfile);
+ fclose(ofile);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/oid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell->db_name);
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Unlink and free one cell from the dbname/oid list; prev is NULL when
+ * cell is the list head.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell->db_name);
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * Returns a properly quoted SQL literal.
+ * Copied from src/backend/utils/adt/quote.c.
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
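The quoting behavior can be checked in isolation. This is a hedged, self-contained restatement of the two functions above, with malloc in place of pg_malloc and the SQL_STR_DOUBLE test expanded inline:

```c
#include <stdlib.h>
#include <string.h>

/* Standalone copy of the quoting logic: single quotes and backslashes are
 * doubled, and a leading E is emitted when the input contains a backslash
 * (escape string syntax).  Caller frees the result. */
static char *
quote_literal_demo(const char *rawstr)
{
    size_t      len = strlen(rawstr);
    char       *result = malloc(len * 2 + 4);   /* worst case + E, quotes, NUL */
    char       *dst = result;
    const char *s;

    /* Use E'...' syntax if any backslash is present. */
    for (s = rawstr; *s; s++)
    {
        if (*s == '\\')
        {
            *dst++ = 'E';
            break;
        }
    }

    *dst++ = '\'';
    for (s = rawstr; *s; s++)
    {
        if (*s == '\'' || *s == '\\')
            *dst++ = *s;        /* double quotes and backslashes */
        *dst++ = *s;
    }
    *dst++ = '\'';
    *dst = '\0';

    return result;
}
```

So `it's` becomes `'it''s'`, and an input containing a backslash gains the `E` prefix with the backslash doubled.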
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..de41ec06d86
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +231,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93dec..cdaf1ad343c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2673,6 +2673,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
Hi,
I found a bug: with "./pg_restore pdd -f -", everything (the global SQL
commands plus the remaining dump) should be copied to stdout, per
"-f, --file=FILENAME output file name (- for stdout)". Instead the global
SQL commands are written to a file literally named "-", and the remaining
dump goes to stdout without them. "-" is not an output file name; it
signifies stdout in terminal commands, so we have to handle this case.
For the same reason, "./pg_restore pdd -g -f -" also creates a file "-"
and writes the globals there instead of to stdout.
Here is a delta patch to handle this case; please have a look and give
some feedback.
@@ -84,7 +84,7 @@ static int restoreAllDatabases(PGconn *conn, const char
*dumpdirpath,
SimpleStringList
db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char
*dumpdirpath,
const char *outfile);
-static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static void copy_global_file(const char *outfile, FILE *pfile);
static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list,
- ofile = fopen(out_file_path, PG_BINARY_W);
+ if (strcmp(outfile, "-") == 0)
+ {
+ int fn = fileno(stdout);
+ ofile = fdopen(dup(fn), PG_BINARY_W);
+ }
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ ofile = fopen(out_file_path, PG_BINARY_W);
+ }
+
+
if (ofile == NULL)
{
Regards,
Srinath Reddy Sadipiralla,
EDB: https://www.enterprisedb.com <http://www.enterprisedb.com/>
here's the whole version of delta patch
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 42c4fe3ce2..90e6b71a50 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -84,7 +84,7 @@ static int restoreAllDatabases(PGconn *conn, const char
*dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int
numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char
*dumpdirpath,
const char *outfile);
-static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static void copy_global_file(const char *outfile, FILE *pfile);
static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list,
SimpleStringList db_exclude_patterns);
@@ -1178,7 +1178,7 @@ execute_global_sql_commands(PGconn *conn, const char
*dumpdirpath, const char *o
*/
if (outfile)
{
- copy_global_file_to_out_file(outfile, pfile);
+ copy_global_file(outfile, pfile);
return;
}
@@ -1207,24 +1207,35 @@ execute_global_sql_commands(PGconn *conn, const
char *dumpdirpath, const char *o
}
/*
- * copy_global_file_to_out_file
+ * copy_global_file
*
- * This will copy global.dat file into out file.
+ * This will copy global.dat file into out file, if file is given
+ * else copies to stdout.
+ *
*/
static void
-copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+copy_global_file(const char *outfile, FILE *pfile)
{
char out_file_path[MAXPGPATH];
FILE *ofile;
int c;
- snprintf(out_file_path, MAXPGPATH, "%s", outfile);
- ofile = fopen(out_file_path, PG_BINARY_W);
+ if (strcmp(outfile, "-") == 0)
+ {
+ int fn = fileno(stdout);
+ ofile = fdopen(dup(fn), PG_BINARY_W);
+ }
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ ofile = fopen(out_file_path, PG_BINARY_W);
+ }
+
if (ofile == NULL)
{
fclose(pfile);
- pg_fatal("could not open file: \"%s\"", out_file_path);
+ pg_fatal("could not open file: \"%s\"", outfile);
}
/* Now append global.dat into out file. */
Regards,
Srinath Reddy Sadipiralla,
EDB: https://www.enterprisedb.com <http://www.enterprisedb.com/>
hi.
git clean -fdx && $BIN10/pg_dumpall --format=directory --file=dir10
$BIN10/pg_restore --format=directory --file=1.sql --verbose dir10 >
dir_format 2>&1
There is no "\connect dbname" command in 1.sql.
Piping 1.sql to psql will execute all of the database dumps into a single
database, which is not good.
We need "\connect dbname" in 1.sql.
--------<<<<<<<>>>>>>>>>>>>>>>------------------
$BIN10/pg_dumpall --format=directory --exclude-database=src10 --file=dir12_temp
drop table t from database x
$BIN10/pg_restore --format=directory --dbname=x --verbose dir12_temp >
dir_format 2>&1
--------log info------------------
pg_restore: found database "template1" (OID: 1) in map.dat file while restoring.
pg_restore: found database "x" (OID: 19554) in map.dat file while restoring.
pg_restore: found total 2 database names in map.dat file
pg_restore: needs to restore 2 databases out of 2 databases
pg_restore: restoring dump of pg_dumpall without -C option, there
might be multiple databases in directory.
pg_restore: restoring database "template1"
pg_restore: connecting to database for restore
pg_restore: implied data-only restore
pg_restore: restoring database "x"
pg_restore: connecting to database for restore
pg_restore: processing data for table "public.t"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3374; 0 19555 TABLE DATA t jian
pg_restore: error: could not execute query: ERROR: relation
"public.t" does not exist
Command was: COPY public.t (a) FROM stdin;
pg_restore: warning: errors ignored on restore: 1
pg_restore: number of restored databases are 2
________________________
$BIN10/pg_restore --format=directory --list dir12_temp
selected output:
; Selected TOC Entries:
;
217; 1259 19555 TABLE public t jian
3374; 0 19555 TABLE DATA public t jian
3228; 2606 19560 CONSTRAINT public t t_pkey jian
As you can see, dir12_temp has TABLE and TABLE DATA.
so the log message above, "pg_restore: implied data-only restore", is
not what we expected.
BTW, with the --create option added, it works as I expected, e.g.
$BIN10/pg_restore --format=directory --create --dbname=x --verbose
dir12_temp > dir_format 2>&1
the output is what I expected.
--------<<<<<<<>>>>>>>>>>>>>>>------------------
With the changes in filter_dbnames_for_restore,
<option>--exclude-database=<replaceable
class="parameter">pattern</replaceable></option>
behaves differently depending on whether the --file option is specified:
* --file specified: --exclude-database=pattern does not allow any special
wildcard characters, so it does not behave as the docs describe.
* --file not specified: it behaves as the docs describe.
That's kind of tricky; either add words to the docs explaining the
scenario where --file is specified, or disallow the --file option when
--exclude-database is specified.
Do we need to update pg_restore.sgml to mention the MAX_ON_EXIT_NICELY
limit of 100?
There are also corner cases like num_db_restore == 0 and
num_db_restore >= 100. In those scenarios execute_global_sql_commands has
already been executed, which is not ideal: we hit pg_fatal after some SQL
commands have already run.
Maybe we should call execute_global_sql_commands and restoreAllDatabases
only when 0 < num_db_restore < 100.
The attached patch tries to do that.
It also makes some cosmetic changes.
Attachments:
v14-0001-pg_restore-dump-global-objects-at-least-one-d.no-cfbot
From d0e8fcf4684adf44bf05ae228590afd5bdc52089 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Mon, 3 Feb 2025 17:08:35 +0800
Subject: [PATCH v14 1/1] pg_restore dump global objects at least one database
needs to be restored
call execute_global_sql_commands only when
0 < num_db_restore < MAX_ON_EXIT_NICELY.
and other cosmetic changes.
---
src/bin/pg_dump/pg_restore.c | 52 +++++++++++++++++-------------------
1 file changed, 25 insertions(+), 27 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 42c4fe3ce2..8bd8a1f6da 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -71,8 +71,8 @@ typedef struct SimpleDatabaseOidList
SimpleDatabaseOidListCell *tail;
} SimpleDatabaseOidList;
-static void
-simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
@@ -495,17 +495,15 @@ main(int argc, char **argv)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
}
- /*
- * Open global.dat file and execute/append all the global sql
- * commands.
- */
- execute_global_sql_commands(conn, inputFileSpec, opts->filename);
-
/* If globals-only, then return from here. */
if (globals_only)
{
- if (conn)
- PQfinish(conn);
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
pg_log_info("databases restoring is skipped as -g/--globals-only option is specified");
}
else
@@ -515,10 +513,12 @@ main(int argc, char **argv)
db_exclude_patterns,
opts,
numWorkers);
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
}
- /* Free db pattern list. */
- simple_string_full_list_delete(&db_exclude_patterns);
+ if (conn)
+ PQfinish(conn);
return exit_code;
}
@@ -988,7 +988,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
while((fgets(line, MAXPGPATH, pfile)) != NULL)
{
Oid db_oid = InvalidOid;
- char db_oid_str[MAXPGPATH + 1];
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
char dbname[MAXPGPATH + 1] = {'\0'};
/* Extract dboid. */
@@ -1078,16 +1078,12 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
db_exclude_patterns);
- /* Close the db connection as we are done with globals and patterns. */
- if (conn)
- PQfinish(conn);
-
- /* Exit if no db needs to be restored. */
- if (dbname_oid_list.head == NULL)
- return 0;
-
pg_log_info("needs to restore %d databases out of %d databases", num_db_restore, num_total_db);
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
/*
* To restore multiple databases, -C (create database) option should be specified
* or all databases should be created before pg_restore.
@@ -1099,11 +1095,13 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (num_db_restore > MAX_ON_EXIT_NICELY)
{
simple_db_oid_full_list_delete(&dbname_oid_list);
- pg_fatal("cound not restore more than %d databases by single pg_restore, here total db:%d",
+ pg_fatal("cound not restore more than %d databases by single pg_restore, here total database:%d",
MAX_ON_EXIT_NICELY,
num_db_restore);
}
+ execute_global_sql_commands(conn, dumpdirpath, opts->filename);
+
/*
* XXX: TODO till now, we made a list of databases, those needs to be restored
* after skipping names of exclude-database. Now we can launch parallel
@@ -1153,8 +1151,8 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
*
* This will open global.dat file and will execute all global sql commands one
* by one statement.
- * Semicolon is considered as statement terminator. If outfile is passed, then
- * this will copy all sql commands into outfile rather then executing them.
+ * Semicolon is considered as statement terminator. If outfile is not NULL, then
+ * we copy all sql commands into outfile rather then executing them.
*/
static void
execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
@@ -1242,7 +1240,7 @@ copy_global_file_to_out_file(const char *outfile, FILE *pfile)
*/
static void
simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname)
+ const char *dbname)
{
SimpleDatabaseOidListCell *cell;
@@ -1310,8 +1308,8 @@ simple_string_full_list_delete(SimpleStringList *list)
*/
static void
simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell,
- SimpleDatabaseOidListCell *prev)
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
{
if (prev == NULL)
{
--
2.34.1
On Mon, Feb 3, 2025 at 5:14 PM jian he <jian.universality@gmail.com> wrote:
there are some corner cases like num_db_restore == 0, num_db_restore >= 100;
in those scenarios, execute_global_sql_commands has already executed,
which is not ideal, since you hit pg_fatal after some sql commands have
already executed.
maybe we can: if 0 < num_db_restore < 100, then
call execute_global_sql_commands and restoreAllDatabases. the attached patch is trying to do that.
the attached patch also does some cosmetic changes.
hi.
please ignore the previous patch; see the patch attached to this email.
previously, in [1], I complained that ``pg_restore --list`` needed a db
connection and also called execute_global_sql_commands.
this email's attached patch fixes the problem; now pg_restore --list
needs no db connection.
now the logic is:
if the num_db_restore value is ok (0 < num_db_restore < MAX_ON_EXIT_NICELY)
*AND* we didn't specify the --list option,
then call execute_global_sql_commands.
[1]: /messages/by-id/CACJufxHUDGWe=2ZukvMfuwEcSK8CsVYm=9+rtPnrW7CRCfoCsw@mail.gmail.com
Attachments:
v14-0001-fix-pg_restore-list-option-and-handle-invoke-.no-cfbot
From bcb3c9ca6e47748f7a5134db9b19e55909677a22 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Mon, 3 Feb 2025 20:48:27 +0800
Subject: [PATCH v14 1/1] fix pg_restore --list option and handle invoke global
objects execution
call execute_global_sql_commands only when
0 < num_db_restore < MAX_ON_EXIT_NICELY.
and other cosmetic changes.
also make pg_restore --list no need database connection, per complain from
https://postgr.es/m/CACJufxHUDGWe=2ZukvMfuwEcSK8CsVYm=9+rtPnrW7CRCfoCsw@mail.gmail.com
now the logic is:
if num_db_restore value is ok (0 < num_db_restore < MAX_ON_EXIT_NICELY)
and we didn't specify --list option then call execute_global_sql_commands
---
src/bin/pg_dump/pg_restore.c | 54 ++++++++++++++++++------------------
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 42c4fe3ce2..eb20079cb8 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -71,8 +71,8 @@ typedef struct SimpleDatabaseOidList
SimpleDatabaseOidListCell *tail;
} SimpleDatabaseOidList;
-static void
-simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid, const char *dbname);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
@@ -495,17 +495,15 @@ main(int argc, char **argv)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
}
- /*
- * Open global.dat file and execute/append all the global sql
- * commands.
- */
- execute_global_sql_commands(conn, inputFileSpec, opts->filename);
-
/* If globals-only, then return from here. */
if (globals_only)
{
- if (conn)
- PQfinish(conn);
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ execute_global_sql_commands(conn, inputFileSpec, opts->filename);
+
pg_log_info("databases restoring is skipped as -g/--globals-only option is specified");
}
else
@@ -515,10 +513,12 @@ main(int argc, char **argv)
db_exclude_patterns,
opts,
numWorkers);
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
}
- /* Free db pattern list. */
- simple_string_full_list_delete(&db_exclude_patterns);
+ if (conn)
+ PQfinish(conn);
return exit_code;
}
@@ -988,7 +988,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
while((fgets(line, MAXPGPATH, pfile)) != NULL)
{
Oid db_oid = InvalidOid;
- char db_oid_str[MAXPGPATH + 1];
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
char dbname[MAXPGPATH + 1] = {'\0'};
/* Extract dboid. */
@@ -1078,16 +1078,12 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
db_exclude_patterns);
- /* Close the db connection as we are done with globals and patterns. */
- if (conn)
- PQfinish(conn);
-
- /* Exit if no db needs to be restored. */
- if (dbname_oid_list.head == NULL)
- return 0;
-
pg_log_info("needs to restore %d databases out of %d databases", num_db_restore, num_total_db);
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
/*
* To restore multiple databases, -C (create database) option should be specified
* or all databases should be created before pg_restore.
@@ -1099,11 +1095,15 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (num_db_restore > MAX_ON_EXIT_NICELY)
{
simple_db_oid_full_list_delete(&dbname_oid_list);
- pg_fatal("cound not restore more than %d databases by single pg_restore, here total db:%d",
+ pg_fatal("cound not restore more than %d databases by single pg_restore, here total database:%d",
MAX_ON_EXIT_NICELY,
num_db_restore);
}
+ /* print out summary, don't need execute the global objects related statement */
+ if (!opts->tocSummary)
+ execute_global_sql_commands(conn, dumpdirpath, opts->filename);
+
/*
* XXX: TODO till now, we made a list of databases, those needs to be restored
* after skipping names of exclude-database. Now we can launch parallel
@@ -1153,8 +1153,8 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
*
* This will open global.dat file and will execute all global sql commands one
* by one statement.
- * Semicolon is considered as statement terminator. If outfile is passed, then
- * this will copy all sql commands into outfile rather then executing them.
+ * Semicolon is treated as the statement terminator. If outfile is not NULL,
+ * we copy all SQL commands into outfile rather than executing them.
*/
static void
execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
@@ -1242,7 +1242,7 @@ copy_global_file_to_out_file(const char *outfile, FILE *pfile)
*/
static void
simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname)
+ const char *dbname)
{
SimpleDatabaseOidListCell *cell;
@@ -1310,8 +1310,8 @@ simple_string_full_list_delete(SimpleStringList *list)
*/
static void
simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell,
- SimpleDatabaseOidListCell *prev)
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
{
if (prev == NULL)
{
--
2.34.1
On Mon, 3 Feb 2025 at 14:23, Srinath Reddy <srinath2133@gmail.com> wrote:
Hi,
I found a bug: when using "./pg_restore pdd -f -", everything (global SQL commands plus the remaining dump) should be copied to stdout, per the help text "-f, --file=FILENAME output file name (- for stdout)". Instead, the global SQL commands are copied to a file literally named "-", and the remaining dump is written to stdout without them. "-" is not an output file name; in terminal commands it signifies stdout, so we have to handle this case.
For the same reason, "./pg_restore pdd -g -f -" also creates a file named "-" and writes the globals to that file instead of to stdout.
I also tested this; in my testing, all the globals were printed to the
console as well, but the patch was creating a "-" file, which was
wrong.
Yes, we should treat "-" as stdout. In the latest patch, I have
fixed this issue.
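The fix being discussed amounts to special-casing "-" before opening the output file. A minimal sketch of that idea (the helper name `open_outfile` is hypothetical, not the patch's actual code):

```c
#include <stdio.h>
#include <string.h>

/* Treat "-" (and a missing name) as "write to stdout", matching the
 * documented -f/--file semantics; any other name opens a regular
 * file. Returns NULL if the file cannot be opened. */
static FILE *
open_outfile(const char *filename)
{
	if (filename == NULL || strcmp(filename, "-") == 0)
		return stdout;		/* never create a file literally named "-" */
	return fopen(filename, "w");
}
```

Routing both the globals and the per-database output through one such helper keeps the "-" handling consistent for -f and -g -f.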
On Mon, 3 Feb 2025 at 14:44, jian he <jian.universality@gmail.com> wrote:
hi.
git clean -fdx && $BIN10/pg_dumpall --format=directory --file=dir10
$BIN10/pg_restore --format=directory --file=1.sql --verbose dir10 > dir_format 2>&1
There is no "\connect dbname" command.
Piping 1.sql to psql will execute all the database dumps into a single
database, which is not good.
We need "\connect dbname" in file 1.sql.
We can't add this command directly to the dump file; we need a TOC
entry for it. I will try to make a TOC entry for this command.
--------<<<<<<<>>>>>>>>>>>>>>>------------------
$BIN10/pg_dumpall --format=directory --exclude-database=src10 --file=dir12_temp
drop table t from database x
$BIN10/pg_restore --format=directory --dbname=x --verbose dir12_temp >
dir_format 2>&1
--------log info------------------
pg_restore: found database "template1" (OID: 1) in map.dat file while restoring.
pg_restore: found database "x" (OID: 19554) in map.dat file while restoring.
pg_restore: found total 2 database names in map.dat file
pg_restore: needs to restore 2 databases out of 2 databases
pg_restore: restoring dump of pg_dumpall without -C option, there
might be multiple databases in directory.
pg_restore: restoring database "template1"
pg_restore: connecting to database for restore
pg_restore: implied data-only restore
pg_restore: restoring database "x"
pg_restore: connecting to database for restore
pg_restore: processing data for table "public.t"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3374; 0 19555 TABLE DATA t jian
pg_restore: error: could not execute query: ERROR: relation
"public.t" does not exist
Command was: COPY public.t (a) FROM stdin;
pg_restore: warning: errors ignored on restore: 1
pg_restore: number of restored databases are 2
________________________
$BIN10/pg_restore --format=directory --list dir12_temp
selected output:
; Selected TOC Entries:
;
217; 1259 19555 TABLE public t jian
3374; 0 19555 TABLE DATA public t jian
3228; 2606 19560 CONSTRAINT public t t_pkey jian
As you can see, dir12_temp has TABLE and TABLE DATA,
so the above log message "pg_restore: implied data-only restore" is
not what we expected.
I will do some tests with pg_dump and -t option.
BTW, with the --create option added, it works as I expected,
like
$BIN10/pg_restore --format=directory --create --dbname=x --verbose
dir12_temp > dir_format 2>&1
the output is what I expected.
--------<<<<<<<>>>>>>>>>>>>>>>------------------
With the changes in filter_dbnames_for_restore,
<option>--exclude-database=<replaceable
class="parameter">pattern</replaceable></option>
will behave differently depending on whether the --file option is specified:
* --file option specified: --exclude-database=pattern does not allow
any special wildcard characters, so it does not behave as the doc
describes.
* --file option not specified: it behaves as the doc describes.
That's kind of tricky; either add more words in the doc explaining the
scenario where the --file option is specified,
or disallow the --file option when --exclude-database is specified.
We will do some more doc changes for this in next versions.
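The two matching modes under discussion (server-side regex when a connection exists, plain name comparison otherwise) can be sketched like this; `db_excluded` is an illustrative assumption, and fnmatch() stands in locally for the server-side test, it is not what the patch runs:

```c
#include <fnmatch.h>
#include <stdbool.h>
#include <string.h>

/* With a connection, the patch evaluates the pattern server-side via
 * SELECT 1 WHERE name OPERATOR(pg_catalog.~) '^(pattern)$'; the
 * fnmatch() call here is only a local stand-in for that. With no
 * connection, fall back to an exact name comparison, as the commit
 * message describes. */
static bool
db_excluded(const char *dbname, const char *pattern, bool have_conn)
{
	if (!have_conn)
		return strcmp(dbname, pattern) == 0;	/* PATTERN == NAME only */
	return fnmatch(pattern, dbname, 0) == 0;
}
```

The behavioral split the doc would need to describe falls out directly: "src*" excludes src10 only in the connected case.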
we need to update pg_restore.sgml about MAX_ON_EXIT_NICELY 100?
Temporarily, we increased this size. Based on other opinions, we will
do more changes for this.
There are some corner cases, like num_db_restore == 0 or
num_db_restore >= 100. In those scenarios,
execute_global_sql_commands has already executed, which is not ideal,
since you hit pg_fatal after some SQL commands have already run.
Maybe we can call execute_global_sql_commands and restoreAllDatabases
only if 0 < num_db_restore < 100.
Got it. Fixed it in the delta patch and added an extra condition to
the IF clause.
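The fixed ordering boils down to validating the database count before any global SQL runs, so pg_fatal can never fire after globals already had side effects. A small sketch of that guard, reusing the patch's names but with a hypothetical helper:

```c
#include <stdbool.h>

#define MAX_ON_EXIT_NICELY 100	/* the patch's per-restore limit */

/* Run execute_global_sql_commands() only when at least one database
 * will actually be restored and the count fits the exit-handler
 * table; checking this first means any pg_fatal happens before any
 * SQL has been executed. */
static bool
globals_should_run(int num_db_restore)
{
	return num_db_restore > 0 && num_db_restore <= MAX_ON_EXIT_NICELY;
}
```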
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v15_pg_dumpall-with-non-text_format-4th_feb.patch (application/octet-stream)
From cfa2902fd835f9afe8c7ae571615b5d24900fa3a Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 3 Feb 2025 23:18:47 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
With the -g/--globals-only option, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat to restore all databases. If a global.dat file exists in the directory,
first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists).
TODO1: We need to think about --exclude-database=PATTERN for pg_restore.
As of now:
with a db connection, use SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default;
with no db connection, only exact PATTERN=NAME matching is done.
TODO2: We need to make changes for exit_nicely, as we add one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for new added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 31 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 286 ++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 22 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 552 ++++++++--------
src/bin/pg_dump/pg_restore.c | 791 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1494 insertions(+), 326 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2c0e3..6e1975f5ff0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To take the dump of all
+ databases as separate per-database archives in subdirectories,
+ pass a non-plain format. By default, the format is plain.
+
+ If a non-plain format is passed, then a global.dat file (global SQL commands) and
+ a map.dat file (dboid and dbname list of all the databases) will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be an entry named after the dboid for each
+ database, and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under that dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..0609b7eb534 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -20,6 +20,8 @@ PostgreSQL documentation
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
+ or restore multiple <productname>PostgreSQL</productname> databases from an
+ archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -166,6 +168,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +336,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..b162cf69412
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,286 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a0dcdbe0807
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+/* TODO: increased this to allow restoring 100 databases with a single pg_restore command. */
+#define MAX_ON_EXIT_NICELY 100
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..e91f4b836f6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -331,9 +331,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data to the output file, as we are
+ * restoring a dump of multiple databases that was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +455,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1268,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1284,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1663,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1684,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..47589cca90f 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -21,8 +22,6 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
-
static struct
{
on_exit_nicely_callback function;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 02e1fdf8f78..61067e1542e 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..80341db324d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,25 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +146,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +188,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +239,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +267,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +418,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, the user must also supply a file
+ * name; it becomes the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +479,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. For a non-plain
+ * format, create the main directory and its global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +514,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +523,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +545,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -607,7 +643,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +656,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -631,12 +667,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or an archive.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1525,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1545,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1553,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; each database's dump is then
+ * written beneath it, just as pg_dump would do for a single database.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,6 +1594,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * If this is a non-plain dump format, append the database OID and name
+ * to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1547,9 +1624,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* We are dumping all databases, so add the --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1558,19 +1643,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1676,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1686,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain-format dump, append the output file name and the dump
+ * format to the pg_dump command, producing an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1765,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1860,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name. If an empty directory of
+ * that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", "
+ "or run %s with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse the format name and return the corresponding ArchiveFormat.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272d7db..444415d2ee0 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,70 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +120,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +174,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +203,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +230,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +352,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +383,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +465,80 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If no toc.dat file is present in the given path, check for global.dat.
+ * If global.dat is present, restore all the databases listed in map.dat
+ * (if it exists), skipping any that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * Connect to database to execute global sql commands from
+ * global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ if (!opts->tocSummary || opts->filename)
+ process_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns,
+ opts,
+ numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+}
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from its toc.dat file.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ int exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,11 +564,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -469,6 +604,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +617,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -619,3 +756,643 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc() until a semicolon (the
+ * SQL statement terminator in the global.dat file) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any database names that match a pattern in
+ * db_exclude_patterns.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection; --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct a pattern-matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * Here XXX is the database name as a string literal, taken from
+ * dbname_oid_list, which was read from the map.dat file in the backup
+ * directory; that is why quote_literal_cstr is needed.
+ *
+ * Without a database connection, patterns are treated as literal
+ * names only.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" is matching with exclude pattern: \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the name if excluded; otherwise count it as to-be-restored. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
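As a review aid, the anchored-regex matching described in the comment above can be illustrated with a small standalone sketch. `pattern_to_regex` is a hypothetical helper, not the patch's `processSQLNamePattern`, which additionally handles quoting, case folding, and dotted names:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: convert a psql-style pattern into the anchored
 * regex "^(...)$" that filter_dbnames_for_restore ultimately matches
 * against. '*' becomes ".*" and '?' becomes '.'; everything else is
 * copied as-is (no quoting or case folding, unlike the real code). */
static void
pattern_to_regex(const char *pattern, char *regex, size_t size)
{
	size_t		n = 0;

	regex[n++] = '^';
	regex[n++] = '(';
	for (; *pattern != '\0' && n < size - 4; pattern++)
	{
		if (*pattern == '*')
		{
			regex[n++] = '.';
			regex[n++] = '*';
		}
		else if (*pattern == '?')
			regex[n++] = '.';
		else
			regex[n++] = *pattern;
	}
	regex[n++] = ')';
	regex[n++] = '$';
	regex[n] = '\0';
}
```

So `--exclude-database='db*'` would be matched as the regex `^(db.*)$`, which is why a bare name with no wildcards degrades gracefully to exact matching when there is no connection.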
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of database
+ * names with their corresponding OIDs.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file while restoring.", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
+
+ /*
+ * XXX: before adding the dbname to the list, we could check whether it
+ * will be skipped during restore, but for now we list all the
+ * databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
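For reference, the line format consumed here ("&lt;dboid&gt; &lt;dbname&gt;\n") can be sketched as a standalone parser mirroring the sscanf/strcpy logic above. `parse_map_line` is an illustrative name, not part of the patch; note it copes with database names containing spaces, since everything after the first separator is taken as the name:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical standalone sketch of the map.dat line format parsed by
 * get_dbname_oid_list_from_mfile: each line is "<dboid> <dbname>\n". */
static int
parse_map_line(const char *line, unsigned int *db_oid,
			   char *dbname, size_t dbname_size)
{
	char		oid_str[32] = {0};

	/* Read the OID twice: once as a number, once as a string so we know
	 * how many characters to skip, mirroring the patch's two sscanf calls. */
	if (sscanf(line, "%u", db_oid) != 1 ||
		sscanf(line, "%31s", oid_str) != 1)
		return -1;

	/* The database name is everything after the OID and the separator. */
	strncpy(dbname, line + strlen(oid_str) + 1, dbname_size - 1);
	dbname[dbname_size - 1] = '\0';

	/* Strip the trailing newline, as the patch does. */
	if (dbname[0] != '\0' && dbname[strlen(dbname) - 1] == '\n')
		dbname[strlen(dbname) - 1] = '\0';

	return (*db_oid != 0 && dbname[0] != '\0') ? 0 : -1;
}
```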
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory, using the
+ * map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore = 0;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect database \"postgres\" to dump into out file");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect database \"template1\" as failed to connect to database \"postgres\" to dump into out file");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("there is no database connection so consider pattern as simple name for --exclude-database");
+ }
+ }
+
+ /*
+ * Filter the database list for pg_restore --exclude-database=PATTERN
+ * (treated as NAME if there is no connection).
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified, or all target databases must be created before running
+ * pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring a pg_dumpall dump without the -C option; the directory may contain multiple databases");
+
+ /* TODO: MAX_ON_EXIT_NICELY (currently 100) limits how many AH handles can be registered with on_exit_nicely. */
+ if (num_db_restore > MAX_ON_EXIT_NICELY)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cound not restore more than %d databases by single pg_restore, here total database:%d",
+ MAX_ON_EXIT_NICELY,
+ num_db_restore);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ if (!opts->tocSummary || opts->filename)
+ process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to restore out of %d databases", num_total_db);
+ return 0;
+ }
+
+ pg_log_info("needs to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO at this point we have the list of databases to restore, with
+ * excluded names already filtered out. We could launch parallel workers
+ * here to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while(dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that objects
+ * are restored into the already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ }
+
+ /* Log the number of restored databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands it contains,
+ * one statement at a time. A semicolon is treated as the statement
+ * terminator. If outfile is given, copy the SQL commands into it instead of
+ * executing them.
+ */
+static void
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_global_file_to_out_file(outfile, pfile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * copy_global_file_to_out_file
+ *
+ * Copy the contents of global.dat into the output file. If outfile is "-",
+ * write to stdout instead.
+ */
+static void
+copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Now append global.dat into out file. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell->db_name);
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete the given cell from the dbname/oid list, given its predecessor.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * Returns a properly quoted literal.
+ * Copied from src/backend/utils/adt/quote.c.
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
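To make the quoting rule concrete, here is a standalone sketch of the behavior: single quotes and backslashes are doubled, and the E'' escape-string prefix is added when a backslash is present. `quote_literal_sketch` is a hypothetical stand-in; the real function relies on the backend's SQL_STR_DOUBLE and ESCAPE_STRING_SYNTAX macros:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the quoting quote_literal_cstr performs. */
static char *
quote_literal_sketch(const char *src)
{
	size_t		len = strlen(src);
	char	   *result = malloc(len * 2 + 3 + 1);	/* worst case, as in the patch */
	char	   *dst = result;

	/* Use E'' syntax when the string contains a backslash. */
	if (strchr(src, '\\') != NULL)
		*dst++ = 'E';

	*dst++ = '\'';
	for (; *src != '\0'; src++)
	{
		if (*src == '\'' || *src == '\\')
			*dst++ = *src;		/* double quotes and backslashes */
		*dst++ = *src;
	}
	*dst++ = '\'';
	*dst = '\0';
	return result;
}
```

This is why a database name like `pat'tern` read from map.dat can be safely embedded into the pattern-matching query.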
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..de41ec06d86
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +231,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93dec..cdaf1ad343c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2673,6 +2673,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
hi.
just a quick response for v15.
the pg_restore man page describes the --list option as "List the table of
contents of the archive".
but
$BIN10/pg_restore --format=directory --list --file=1.sql dir10
also outputs the contents of "global.dat"; we should not output it.
in restoreAllDatabases, we can do the following change:
```
/* Open global.dat file and execute/append all the global sql commands. */
if (!opts->tocSummary)
process_global_sql_commands(conn, dumpdirpath, opts->filename);
```
what should happen with
$BIN10/pg_restore --format=directory --globals-only --verbose dir10 --list
Should we error out saying "--globals-only" and "--list" are conflicting options?
if so then in main function we can do the following change:
```
if (globals_only)
{
process_global_sql_commands(conn, inputFileSpec, opts->filename);
if (conn)
PQfinish(conn);
pg_log_info("databases restoring is skipped as -g/--globals-only
option is specified");
}
```
in restoreAllDatabases, if num_db_restore == 0, we will still call
process_global_sql_commands.
I am not sure this is what we expected.
hi.
This attached patch solves problems mentioned in [1].
so pg_restore --file restoring multiple databases will have the
```\connect dbname``` command in it.
the output plain text file can be used in psql.
pg_restore --file output will be:
--
-- Database "template1" dump
--
-- Dumped from database version 18devel_debug_build_622f678c10
-- Dumped by pg_dump version 18devel_debug_build_622f678c10
-- Started on 2025-02-04 14:34:44 CST
\connect template1
.....
-- Completed on 2025-02-04 14:34:53 CST
--
-- Database "template1" dump complete
--
[1]: /messages/by-id/CACJufxFrzYJ0oZNm=v9hg10UpPQNe+p0+2ydNirHxyhUT_JtXw@mail.gmail.com
Attachments:
v15-0001-make-pg_restore-file-option-using-connect-for.no-cfbot
From a46d28371f62b51712f24f687dedfa1cfdcca342 Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Tue, 4 Feb 2025 15:24:59 +0800
Subject: [PATCH v15 1/1] make pg_restore --file option use \connect for
restoring multiple databases.
This patch solves problems mentioned in [1].
so pg_restore --file output is sane.
we print out the database name we are restoring through
```ahprintf(AH, "--\n-- Database \"%s\" dump\n--\n\n", dbname);```.
So the pg_restore --file output comments make it easy to distinguish which database contents we are dumping.
overall it will be like:
---------------------------------------------------------------
--
-- Database "template1" dump
--
-- Dumped from database version 18devel_debug_build_622f678c10
-- Dumped by pg_dump version 18devel_debug_build_622f678c10
-- Started on 2025-02-04 14:34:44 CST
\connect template1
.....
-- Completed on 2025-02-04 14:34:53 CST
--
-- Database "template1" dump complete
--
---------------------------------------------------------------
[1] https://postgr.es/m/CACJufxFrzYJ0oZNm=v9hg10UpPQNe+p0+2ydNirHxyhUT_JtXw@mail.gmail.com
---
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 21 ++++++++++++++-------
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_restore.c | 12 +++++++-----
5 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 65000e5a08..729ffc9e12 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX, bool append_data);
+extern void RestoreArchive(Archive *AHX, bool append_data, const char *dbname);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index e91f4b836f..fd6fd16642 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -336,9 +336,11 @@ ProcessArchiveRestoreOptions(Archive *AHX)
*
* If append_data is set, then append data into file as we are restoring dump
* of multiple databases which was taken by pg_dumpall.
+ * If dbname is not NULL, the plain-text output produced by pg_restore will
+ * contain comments identifying which database is being dumped.
*/
void
-RestoreArchive(Archive *AHX, bool append_data)
+RestoreArchive(Archive *AHX, bool append_data, const char *dbname)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -457,7 +459,10 @@ RestoreArchive(Archive *AHX, bool append_data)
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
- ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
+ if (append_data && dbname != NULL)
+ ahprintf(AH, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ else
+ ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
if (AH->archiveRemoteVersion)
ahprintf(AH, "-- Dumped from database version %s\n",
@@ -797,8 +802,10 @@ RestoreArchive(Archive *AHX, bool append_data)
if (AH->public.verbose)
dumpTimestamp(AH, "Completed on", time(NULL));
- ahprintf(AH, "--\n-- PostgreSQL database dump complete\n--\n\n");
-
+ if (append_data && dbname != NULL)
+ ahprintf(AH, "--\n-- Database \"%s\" dump complete\n--\n\n", dbname);
+ else
+ ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
/*
* Clean up & we're done.
*/
@@ -2926,13 +2933,13 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
/*
* DATABASE and DATABASE PROPERTIES also have a special rule: they are
- * restored in createDB mode, and not restored otherwise, independently of
- * all else.
+ * restored in createDB mode or when the output format is not plain text,
+ * and not restored otherwise, independently of all else.
*/
if (strcmp(te->desc, "DATABASE") == 0 ||
strcmp(te->desc, "DATABASE PROPERTIES") == 0)
{
- if (ropt->createDB)
+ if (ropt->createDB || AH->format != archNull)
return REQ_SCHEMA;
else
return 0;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index d94d0de2a5..45f0fb46e0 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH, false);
+ RestoreArchive((Archive *) AH, false, NULL);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 61067e1542..51c595a7a5 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout, false);
+ RestoreArchive(fout, false, NULL);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 444415d2ee..b216873773 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -75,7 +75,8 @@ static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
static bool IsFileExistsInDirectory(const char *dir, const char *filename);
static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, bool append_data);
+ int numWorkers, bool append_data,
+ const char *dbname);
static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
@@ -525,16 +526,17 @@ main(int argc, char **argv)
}
}
- return restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false, NULL);
}
/*
* restoreOneDatabase
*
* This will restore one database using toc.dat file.
+ * dbname is the name of the database currently being restored.
*/
static int
restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, bool append_data)
+ int numWorkers, bool append_data, const char *dbname)
{
Archive *AH;
int exit_code;
@@ -568,7 +570,7 @@ restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH, append_data);
+ RestoreArchive(AH, append_data, dbname);
}
/* done, print a summary of ignored errors */
@@ -1138,7 +1140,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
- dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true, dboid_cell->db_name);
/* Store exit_code to report it back. */
if (exit_code == 0 && dbexit_code != 0)
--
2.34.1
Thanks Jian.
On Tue, 4 Feb 2025 at 07:35, jian he <jian.universality@gmail.com> wrote:
hi.
just a quick response for v15.
the pg_restore man page says option --list as "List the table of
contents of the archive".
but
$BIN10/pg_restore --format=directory --list --file=1.sql dir10
also output the contents of "global.dat", we should not output it.
I think we can add an error for --list option if used with the dump of
pg_dumpall. If a user wants to use --list option, then they can use a
single dump file.
in restoreAllDatabases, we can do the following change:
```
/* Open global.dat file and execute/append all the global sql commands. */
if (!opts->tocSummary)
process_global_sql_commands(conn, dumpdirpath, opts->filename);
```
what should happen with
$BIN10/pg_restore --format=directory --globals-only --verbose dir10 --list
Should we error out saying "--globals-only" and "--list" are conflicting options?
if so then in main function we can do the following change:
Fixed.
```
if (globals_only)
{
process_global_sql_commands(conn, inputFileSpec, opts->filename);
if (conn)
PQfinish(conn);
pg_log_info("databases restoring is skipped as -g/--globals-only
option is specified");
}
```
in restoreAllDatabases, if num_db_restore == 0, we will still call
process_global_sql_commands.
I am not sure this is what we expected.
This is correct. We should run the global commands, since we dumped them,
even if we don't restore any database.
Apart from these, I merged the v15 delta to print db names. We could keep
the db name output or remove it later, but as of now I merged the delta
patch.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v16_pg_dumpall-with-non-text_format-11th_feb.patch
From 36aea11e07ee22838d9698faa0fa5518e329abc4 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 11 Feb 2025 10:58:04 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=d|t|c is specified and there is no toc.dat in main directory, then check
for global.dat to restore all databases. If global.dat file is exist in directory,
then first restore all globals from global.dat and then restore all databases one by one
from map.dat list (if exist)
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if db connection,
if no db connection, then PATTERN=NAME matching only
TODO2: We need to make changes for exit_nicely as we are one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for new added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 31 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 286 ++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 41 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 552 +++++++---------
src/bin/pg_dump/pg_restore.c | 797 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1513 insertions(+), 332 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2c0e3..6e1975f5ff0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file or other archive format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify format of dump files. If we want to dump all the databases,
+ then pass this as non-plain so that dump of all databases can be taken
+ in separate subdirectory in archive format.
+ by default, this is plain format.
+
+ If non-plain mode is passed, then global.dat (global sql commands) and
+ map.dat(dboid and dbnames list of all the databases) files will be created.
+ Apart from these files, one subdirectory with databases name will be created.
+ Under this databases subdirectory, there will be files with dboid name for each
+ database and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under each dboid
+ subdirectory, this creates a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..0609b7eb534 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -20,6 +20,8 @@ PostgreSQL documentation
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
+ or restore multiple <productname>PostgreSQL</productname> databases from an
+ archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -166,6 +168,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +336,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..b162cf69412
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,286 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a0dcdbe0807
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+/* TODO: increased so that a single pg_restore invocation can restore up to 100 databases. */
+#define MAX_ON_EXIT_NICELY 100
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..729ffc9e124 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, const char *dbname);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 707a3fc844c..fd6fd16642e 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -331,9 +331,16 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append the output to the file, since we are
+ * restoring a dump of multiple databases that was taken by pg_dumpall.
+ * If dbname is not NULL, the plain-text output will include comments
+ * identifying which database is currently being restored.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, const char *dbname)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,9 +457,12 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
- ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
+ if (append_data && dbname != NULL)
+ ahprintf(AH, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ else
+ ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
if (AH->archiveRemoteVersion)
ahprintf(AH, "-- Dumped from database version %s\n",
@@ -792,8 +802,10 @@ RestoreArchive(Archive *AHX)
if (AH->public.verbose)
dumpTimestamp(AH, "Completed on", time(NULL));
- ahprintf(AH, "--\n-- PostgreSQL database dump complete\n--\n\n");
-
+ if (append_data && dbname != NULL)
+ ahprintf(AH, "--\n-- Database \"%s\" dump complete\n--\n\n", dbname);
+ else
+ ahprintf(AH, "--\n-- PostgreSQL database dump complete\n--\n\n");
/*
* Clean up & we're done.
*/
@@ -1263,7 +1275,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1291,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1670,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1691,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
@@ -2920,13 +2933,13 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
/*
* DATABASE and DATABASE PROPERTIES also have a special rule: they are
- * restored in createDB mode, and not restored otherwise, independently of
- * all else.
+ * restored in createDB mode or when the output format is not plain
+ * text, and not restored otherwise, independently of all else.
*/
if (strcmp(te->desc, "DATABASE") == 0 ||
strcmp(te->desc, "DATABASE PROPERTIES") == 0)
{
- if (ropt->createDB)
+ if (ropt->createDB || AH->format != archNull)
return REQ_SCHEMA;
else
return 0;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..45f0fb46e0f 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, NULL);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..47589cca90f 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -21,8 +22,6 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
-
static struct
{
on_exit_nicely_callback function;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 02e1fdf8f78..51c595a7a5f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, NULL);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 396f79781c5..80341db324d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,25 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +146,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +188,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +239,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +267,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +418,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name must also be
+ * provided, to create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty string");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +479,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +514,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +523,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +545,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -607,7 +643,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -620,7 +656,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -631,12 +667,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into the specified dump format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1487,10 +1525,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1504,7 +1545,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1512,9 +1553,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format (directory/tar/custom) is specified, create a
+ * "databases" subdirectory under the main directory; each database's
+ * dump file or subdirectory is then created under it, just as pg_dump
+ * would produce for a single database.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open map file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1529,6 +1594,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain dump formats, compute the per-database dump path and
+ * record the database OID and name in the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1547,9 +1624,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* We are dumping all databases, so add the --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1558,19 +1643,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1580,7 +1676,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1589,17 +1686,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the dump
+ * format to the pg_dump command so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1649,256 +1765,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1994,3 +1860,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name. If an empty directory with
+ * that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into this directory, either remove or empty "
+ "the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the given dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272d7db..af7d815a770 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,71 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data,
+ const char *dbname);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +121,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +175,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +204,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +231,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the globals from the directory's global.dat file */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +353,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +466,86 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If the toc.dat file is not present in the given path, check for
+ * global.dat. If global.dat is present, restore all the databases
+ * listed in map.dat (if it exists), skipping any that match the
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * The -l/--list option is only supported with a single-database dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used with a dump produced by pg_dumpall");
+
+ /*
+ * Connect to database to execute global sql commands from
+ * global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ process_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns,
+ opts,
+ numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false, NULL);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database using its toc.dat file.
+ * dbname is the name of the database being restored.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, const char *dbname)
+{
+ Archive *AH;
+ int exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,11 +571,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, dbname);
}
/* done, print a summary of ignored errors */
@@ -469,6 +611,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +624,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -619,3 +763,642 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc() until a semicolon (the SQL
+ * statement terminator used in the global.dat file) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ pfree(q.data);
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
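For reference, the quote-tracking idea used above can be sketched more compactly. The following is an illustrative standalone version (the name `read_one_statement` and the fixed-size buffer are assumptions of this sketch, not the patch's API) that stops at the first semicolon not inside a single- or double-quoted run:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch only: read one statement from fp into buf, stopping at
 * the first semicolon that is not inside a single- or double-quoted run.
 * Returns 'Q' when a statement was read, EOF at end of input.
 */
static int
read_one_statement(char *buf, size_t bufsize, FILE *fp)
{
	int c;
	int quote = 0;			/* current quote character, or 0 */
	size_t n = 0;

	while ((c = fgetc(fp)) != EOF && n < bufsize - 1)
	{
		buf[n++] = (char) c;

		if (quote)
		{
			if (c == quote)
				quote = 0;	/* closing quote */
		}
		else if (c == '\'' || c == '"')
			quote = c;		/* opening quote */
		else if (c == ';')
			break;			/* unquoted terminator */
	}
	buf[n] = '\0';

	return (n == 0 && c == EOF) ? EOF : 'Q';
}
```

A statement such as `CREATE ROLE "a;b";` is read as a single unit because the embedded semicolon is inside a quoted identifier.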
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list any database names that match the
+ * db_exclude_patterns list.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("treating --exclude-database patterns as literal names because there is no database connection");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX represents the string literal database name derived from the
+ * dboid_list variable, which is initially extracted from the
+ * map.dat file located in the backup directory. That is why we
+ * need quote_literal_cstr.
+ *
+ * If we don't have db connection, then consider patterns as NAME
+ * only.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the database if excluded; otherwise count it for restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat file and read line by line and then prepare a list of database
+ * names and corresponding db_oid.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restore is skipped because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether the
+ * database is to be skipped during restore, but for now we build a
+ * list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
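To illustrate the line format the sscanf/strcpy sequence above assumes ("&lt;oid&gt; &lt;dbname&gt;\n", where the name may itself contain spaces), here is a hedged standalone sketch of the per-line parsing; `parse_map_line` is a hypothetical helper for illustration, not part of the patch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch: parse one "<oid> <dbname>\n" line from map.dat.
 * Returns 1 on success, 0 if the line does not match the expected shape.
 */
static int
parse_map_line(const char *line, unsigned int *db_oid,
			   char *dbname, size_t dbname_size)
{
	char oid_str[64] = {'\0'};
	size_t len;

	/* Extract the OID both numerically and as a string (to find its width). */
	if (sscanf(line, "%u", db_oid) != 1 ||
		sscanf(line, "%63s", oid_str) != 1)
		return 0;

	/* Everything after the OID and the separating space is the name. */
	if (strlen(line) <= strlen(oid_str) + 1)
		return 0;
	strncpy(dbname, line + strlen(oid_str) + 1, dbname_size - 1);
	dbname[dbname_size - 1] = '\0';

	/* Strip the trailing newline, as the patch does. */
	len = strlen(dbname);
	if (len > 0 && dbname[len - 1] == '\n')
		dbname[len - 1] = '\0';

	return *db_oid != 0 && dbname[0] != '\0';
}
```

Because the name is taken as "everything after the first space", a database called `my db` round-trips correctly, which a second `%s` conversion would not handle.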
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases matching the --exclude-database patterns are skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore = 0;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection; --exclude-database patterns will be treated as literal names");
+ }
+ }
+
+ /*
+ * Process the pg_restore --exclude-database=PATTERN options (patterns are
+ * treated as plain names if there is no connection).
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /*
+ * To restore multiple databases, the -C (create database) option should be
+ * specified, or all the databases should be created before running pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring a pg_dumpall dump without the -C option; the directory may contain multiple databases");
+
+ /* TODO: MAX_ON_EXIT_NICELY (currently 100) limits how many AH handles can be registered on exit. */
+ if (num_db_restore > MAX_ON_EXIT_NICELY)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than %d databases in a single pg_restore run, but %d need to be restored",
+ MAX_ON_EXIT_NICELY,
+ num_db_restore);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return 0;
+ }
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX/TODO: so far we have built the list of databases to be restored,
+ * after removing the names excluded by --exclude-database. We could now
+ * launch parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname so that objects are restored into the
+ * already-created database (used with the -d/--dbname option).
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true, dboid_cell->db_name);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute all global SQL commands, one
+ * statement at a time. A semicolon is considered the statement terminator.
+ * If outfile is given, copy all the SQL commands into outfile rather than
+ * executing them.
+ */
+static void
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_global_file_to_out_file(outfile, pfile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * copy_global_file_to_out_file
+ *
+ * Copy the global.dat file into the output file. If "-" is given as outfile,
+ * print the commands to stdout.
+ */
+static void
+copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Now append global.dat into out file. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete one cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid if the last cell was removed. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
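As a sanity check of the quoting rules copied from quote.c above, this standalone re-implementation shows the expected outputs (plain malloc stands in for pg_malloc and a literal 'E' for ESCAPE_STRING_SYNTAX; those substitutions are assumptions of this sketch, not PG internals):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Standalone sketch of quote_literal_cstr: single quotes and backslashes are
 * doubled, and a leading E (escape-string syntax) is emitted when the string
 * contains a backslash.
 */
static char *
quote_literal_demo(const char *rawstr)
{
	size_t len = strlen(rawstr);
	char *result = malloc(len * 2 + 3 + 1);	/* worst case: every char doubled */
	char *dst = result;
	const char *s;

	for (s = rawstr; *s; s++)
	{
		if (*s == '\\')
		{
			*dst++ = 'E';		/* stands in for ESCAPE_STRING_SYNTAX */
			break;
		}
	}

	*dst++ = '\'';
	for (s = rawstr; *s; s++)
	{
		if (*s == '\'' || *s == '\\')
			*dst++ = *s;		/* double quotes and backslashes */
		*dst++ = *s;
	}
	*dst++ = '\'';
	*dst = '\0';

	return result;
}
```

So a database named `O'Reilly` becomes the literal `'O''Reilly'`, which is what the processSQLNamePattern call in filter_dbnames_for_restore needs.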
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..de41ec06d86
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +231,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93dec..cdaf1ad343c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2673,6 +2673,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
hi.
review based on v16.
because of
/messages/by-id/CAFC+b6pWQiSL+3rvLxN9vhC8aONp4OV9c6u+BVD6kmWmDbd1WQ@mail.gmail.com
in copy_global_file_to_out_file, now it is:
if (strcmp(outfile, "-") == 0)
OPF = stdout;
I am confused: why does "-" mean stdout?
``touch ./-`` works fine, and I don't think the dash is a special
character; you may see
https://stackoverflow.com/a/40650391/15603477
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
should we use pg_fatal here?
pg_log_info("executing %s", sqlstatement.data);
change to
pg_log_info("executing query: %s", sqlstatement.data);
message would be more similar to the next pg_log_error(...) message.
+ /*
+ * User is suggested to use single database dump for --list option.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when using dump of pg_dumpall");
maybe change to
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases");
$BIN10/pg_restore --format=directory --list dir10_x
if the directory only has one database, then we can actually print out
the tocSummary.
if the directory has more than one database then pg_fatal.
Tolerating this corner case (only one database) means that pg_restore
--list requires a DB connection,
but I am not sure that is fine.
anyway, the attached patch allows this corner case.
PrintTOCSummary can only print out summary for a single database.
so we don't need to change PrintTOCSummary.
+ /*
+ * To restore multiple databases, -C (create database) option should
be specified
+ * or all databases should be created before pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
we can change it to
+ if (opts->createDB != 1 && num_db_restore > 0)
+ pg_log_info("restoring multiple databases without -C option.");
Bug: pg_restore --globals-only can be applied even when we are restoring a
single database (which can be an output of pg_dump).
There are some tests per https://commitfest.postgresql.org/52/5495, I
will check it later.
The attached patch is the change for the above reviews.
Attachments:
v16_misc_changes.nocfbot
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 729ffc9e12..1e00fedacd 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX, bool append_data);
+extern void PrintTOCSummary(Archive *AHX);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 32d645728a..144d97d0f4 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -1275,7 +1275,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX, bool append_data)
+PrintTOCSummary(Archive *AHX)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1291,7 +1291,7 @@ PrintTOCSummary(Archive *AHX, bool append_data)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec, append_data);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 0b6e974380..b75e4f56f3 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1567,7 +1567,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
/* Create a subdirectory with 'databases' name under main directory. */
if (mkdir(db_subdir, 0755) != 0)
- pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index af7d815a77..6dd82f08f6 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -121,7 +121,7 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
- bool globals_only = false;
+ bool globals_only = false;
SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
@@ -391,6 +391,13 @@ main(int argc, char **argv)
exit_nicely(1);
}
+ if (opts->tocSummary && globals_only)
+ {
+ pg_log_error("option -l/--list cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -481,12 +488,6 @@ main(int argc, char **argv)
PGconn *conn = NULL; /* Connection to restore global sql commands. */
int exit_code = 0;
- /*
- * User is suggested to use single database dump for --list option.
- */
- if (opts->tocSummary)
- pg_fatal("option -l/--list cannot be used when using dump of pg_dumpall");
-
/*
* Connect to database to execute global sql commands from
* global.dat file.
@@ -531,6 +532,9 @@ main(int argc, char **argv)
}
}
+ if (inputFileSpec != NULL && globals_only)
+ pg_fatal("could not specify --globals-only when restoring a single database");
+
return restoreOneDatabase(inputFileSpec, opts, numWorkers, false, NULL);
}
/*
@@ -571,7 +575,7 @@ restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH, append_data);
+ PrintTOCSummary(AH);
else
{
ProcessArchiveRestoreOptions(AH);
@@ -1090,8 +1094,11 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* To restore multiple databases, -C (create database) option should be specified
* or all databases should be created before pg_restore.
*/
- if (opts->createDB != 1)
- pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
+ if (opts->createDB != 1 && num_db_restore > 1)
+ pg_log_info("restoring multiple databases without -C option.");
+
+ if (opts->tocSummary && num_db_restore > 1)
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases");
/* TODO: MAX_ON_EXIT_NICELY is 100 now... max AH handle register on exit .*/
if (num_db_restore > MAX_ON_EXIT_NICELY)
@@ -1103,7 +1110,8 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
}
/* Open global.dat file and execute/append all the global sql commands. */
- process_global_sql_commands(conn, dumpdirpath, opts->filename);
+ if (!opts->tocSummary)
+ process_global_sql_commands(conn, dumpdirpath, opts->filename);
/* Close the db connection as we are done with globals and patterns. */
if (conn)
@@ -1223,8 +1231,7 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
/*
* copy_global_file_to_out_file
*
- * This will copy global.dat file into out file. If "-" is used as outfile,
- * then print commands to the stdout.
+ * This will copy global.dat file into outfile.
*/
static void
copy_global_file_to_out_file(const char *outfile, FILE *pfile)
@@ -1233,19 +1240,13 @@ copy_global_file_to_out_file(const char *outfile, FILE *pfile)
FILE *OPF;
int c;
- /* "-" is used for stdout. */
- if (strcmp(outfile, "-") == 0)
- OPF = stdout;
- else
- {
- snprintf(out_file_path, MAXPGPATH, "%s", outfile);
- OPF = fopen(out_file_path, PG_BINARY_W);
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
- if (OPF == NULL)
- {
- fclose(pfile);
- pg_fatal("could not open file: \"%s\"", outfile);
- }
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
}
/* Now append global.dat into out file. */
@@ -1253,10 +1254,7 @@ copy_global_file_to_out_file(const char *outfile, FILE *pfile)
fputc(c, OPF);
fclose(pfile);
-
- /* Close out file. */
- if (strcmp(outfile, "-") != 0)
- fclose(OPF);
+ fclose(OPF);
}
/*
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index de41ec06d8..8bb9edc5b5 100755
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -224,6 +224,11 @@ command_fails_like(
qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+command_fails_like(
+ [ 'pg_restore', '--list', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option -l/--list cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
On Tue, 11 Feb 2025 at 20:40, jian he <jian.universality@gmail.com> wrote:
hi.
review based on v16.
because of
/messages/by-id/CAFC+b6pWQiSL+3rvLxN9vhC8aONp4OV9c6u+BVD6kmWmDbd1WQ@mail.gmail.com
in copy_global_file_to_out_file, now it is:
if (strcmp(outfile, "-") == 0)
OPF = stdout;
I am confused: why does "-" mean stdout?
``touch ./-`` works fine, and I don't think the dash is a special
character; you may see
https://stackoverflow.com/a/40650391/15603477
"-" is used for stdout. This is mentioned in the doc.
pg_restore link <https://www.postgresql.org/docs/current/app-pgrestore.html>
-f *filename*
--file=*filename*
Specify output file for generated script, or for the listing when used
with -l. Use - for stdout.
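In other words, the pg_dump family follows the common CLI convention of treating a filename of "-" as stdout. A minimal sketch of that check (the helper name here is illustrative, not from the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper mirroring the convention: "-" selects stdout,
 * anything else is opened as a regular file. */
static FILE *
open_output(const char *filename)
{
	if (strcmp(filename, "-") == 0)
		return stdout;
	return fopen(filename, "w");
}
```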
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
here we should use pg_fatal?
Yes, we should use pg_fatal.
pg_log_info("executing %s", sqlstatement.data);
change to
pg_log_info("executing query: %s", sqlstatement.data);
message would be more similar to the next pg_log_error(...) message.
Okay.
+ /*
+ * User is suggested to use single database dump for --list option.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when using dump of pg_dumpall");
maybe change to
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases");
okay.
$BIN10/pg_restore --format=directory --list dir10_x
if the directory only has one database, then we can actually print out
the tocSummary.
if the directory has more than one database then pg_fatal.
Tolerating this corner case (only one database) means that pg_restore
--list requires a DB connection,
but I am not sure that is fine.
anyway, the attached patch allows this corner case.
No, we don't need this corner case. If a user wants to restore a
single database with --list option, then the user should give a particular
dump file with pg_restore.
PrintTOCSummary can only print out summary for a single database.
so we don't need to change PrintTOCSummary.
+ /*
+ * To restore multiple databases, -C (create database) option should be specified
+ * or all databases should be created before pg_restore.
+ */
+ if (opts->createDB != 1)
+ pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
we can change it to
+ if (opts->createDB != 1 && num_db_restore > 0)
+ pg_log_info("restoring multiple databases without -C option.");
okay.
Bug: pg_restore --globals-only can be applied even when we are restoring a
single database (which can be an output of pg_dump).
As of now, we are ignoring this option. We can add an error in the "else"
branch of the global.dat check.
Ex: option --globals-only is only supported with a dump of pg_dumpall.
Similarly for --exclude-database.
There are some tests per https://commitfest.postgresql.org/52/5495, I
will check it later.
The attached patch is the change for the above reviews.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Wed, Feb 12, 2025 at 1:17 AM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
There are some tests per https://commitfest.postgresql.org/52/5495, I
will check it later.
hi.
the cfbot failure is related to function _tocEntryRequired
if (strcmp(te->desc, "DATABASE") == 0 ||
strcmp(te->desc, "DATABASE PROPERTIES") == 0)
{
- if (ropt->createDB)
+ if (ropt->createDB || AH->format != archNull)
return REQ_SCHEMA;
else
return 0;
for restoring multiple databases:
in the v16 implementation, even if you do not specify --create, pg_restore
actually did what the --create option does.
if there are multiple databases in the archive:
to make the pg_restore --file output usable, the output file needs to
have \connect and CREATE DATABASE commands;
that is exactly what the --create option would do.
pg_restore --file behavior needs to align with pg_restore --dbname,
therefore pg_restore will use the --create option when restoring multiple databases.
we can either error out (pg_fatal) saying
restoring multiple databases requires the pg_restore --create option.
Or we can add a pg_log_info saying
pg_restore --create option will be set to true while restoring
multiple databases.
for restoring one database, the master behavior is fine.
so we don't need to change _tocEntryRequired.
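The rule being kept for single-database restores can be summarized as a small predicate; this is a sketch with illustrative names, not the patch's code, and it models only the DATABASE branch of _tocEntryRequired:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* DATABASE and DATABASE PROPERTIES TOC entries are required only in
 * createDB mode, independent of archive format; other entry kinds are
 * decided by the rest of _tocEntryRequired (not modeled here). */
static bool
database_entry_required(const char *desc, bool createDB)
{
	if (strcmp(desc, "DATABASE") == 0 ||
		strcmp(desc, "DATABASE PROPERTIES") == 0)
		return createDB;
	return true;
}
```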
Attachments:
v16_misc.nocfbot
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 32d645728a..1dfa0420ea 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -2934,13 +2934,13 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
/*
* DATABASE and DATABASE PROPERTIES also have a special rule: they are
- * restored in createDB mode or restored format is not plain file, and not
- * restored otherwise, independently of all else.
+ * restored in createDB mode, and not restored otherwise, independently of
+ * all else.
*/
if (strcmp(te->desc, "DATABASE") == 0 ||
strcmp(te->desc, "DATABASE PROPERTIES") == 0)
{
- if (ropt->createDB || AH->format != archNull)
+ if (ropt->createDB)
return REQ_SCHEMA;
else
return 0;
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index af7d815a77..2d3ae14f75 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -1090,8 +1090,11 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* To restore multiple databases, -C (create database) option should be specified
* or all databases should be created before pg_restore.
*/
- if (opts->createDB != 1)
- pg_log_info("restoring dump of pg_dumpall without -C option, there might be multiple databases in directory.");
+ if (num_db_restore > 1 && opts->createDB != 1)
+ {
+ pg_log_info("restoring multiple databases without -C option, implicit -C is assumed");
+ opts->createDB = 1;
+ }
/* TODO: MAX_ON_EXIT_NICELY is 100 now... max AH handle register on exit .*/
if (num_db_restore > MAX_ON_EXIT_NICELY)
@@ -1144,7 +1147,10 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
- dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true, dboid_cell->db_name);
+ if (num_db_restore == 1)
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, false, dboid_cell->db_name);
+ else
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers, true, dboid_cell->db_name);
/* Store exit_code to report it back. */
if (exit_code == 0 && dbexit_code != 0)
Thanks Jian.
On Wed, 12 Feb 2025 at 12:45, jian he <jian.universality@gmail.com> wrote:
On Wed, Feb 12, 2025 at 1:17 AM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
There are some tests per https://commitfest.postgresql.org/52/5495, I
will check it later.
hi.
the cfbot failure is related to function _tocEntryRequired
if (strcmp(te->desc, "DATABASE") == 0 ||
strcmp(te->desc, "DATABASE PROPERTIES") == 0)
{
- if (ropt->createDB)
+ if (ropt->createDB || AH->format != archNull)
return REQ_SCHEMA;
else
return 0;
for restoring multiple databases:
in v16 implementation: pg_restore even if you do not specify --create,
it actually did what pg_restore --create option does.
if there are multiple databases in the archive:
to make the pg_restore --file output is usable, the output file need
have \connect and CREATE DATABASE
command. that is exactly what --create option would do.
pg_restore --file behavior need align with pg_restore --dbname.
therefore pg_restore restoring multiple databases will use --create option.
we can either error out (pg_fatal) saying
restoring multiple databases requires the pg_restore --create option.
Or we can add a pg_log_info saying
pg_restore --create option will be set to true while restoring
multiple databases.
In my earlier version, I was giving an error if --create option was
not specified.
I think it will be good and more preferable if we give an error when the
--create option is not specified and the dump was taken by pg_dumpall. Even
though there is a single database in the dump of pg_dumpall, it is
possible that that particular database hasn't been created.
Ex: -d postgres and we have db1 dump in file. In this case, we have
only one database dump but this database has not been created.
If the user wants to restore a single database, then the user should
use a single database dump file. Forcefully adding --create option is
not a good idea, instead we will give an error to the user and let him
correct the inputs.
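The check being argued for amounts to something like the following (a sketch, assuming num_db_restore is the count of databases listed in map.dat; the function name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Restoring more than one database requires the --create option; without
 * it the caller would pg_fatal() rather than silently enabling --create. */
static bool
multi_db_restore_allowed(int num_db_restore, bool createDB)
{
	if (num_db_restore > 1 && !createDB)
		return false;
	return true;
}
```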
Apart from the above handling, I fixed all the pending review comments
in this patch and made some more changes.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v17_pg_dumpall-with-non-text_format-13th_feb.patch
From 6c02c291516e812943dc5e1f3e7122c77579a5c6 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 13 Feb 2025 16:41:18 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (default: plain text)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname ---entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, refer to the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, then check
for global.dat to restore all databases. If a global.dat file exists in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists)
TODO1: We need to think for --exclude-database=PATTERN for pg_restore.
as of now: SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
when there is a db connection;
if there is no db connection, then only PATTERN=NAME matching
TODO2: We need to make changes for exit_nicely as we register one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for new added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 31 +
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 286 ++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 35 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 552 +++++++--------
src/bin/pg_dump/pg_restore.c | 819 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1529 insertions(+), 332 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2c0e3..6e1975f5ff0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster using a specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. To dump all databases, pass a
+ non-plain format so that the dump of each database is written to a
+ separate subdirectory in archive format.
+ By default, the format is plain.
+
+ If a non-plain format is passed, then global.dat (global SQL commands) and
+ map.dat (dboid and dbname list of all the databases) files will be created.
+ Apart from these files, a subdirectory named databases will be created.
+ Under this databases subdirectory, there will be a directory named for each
+ database's dboid, and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under the dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..924d5bac817 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -20,6 +20,8 @@ PostgreSQL documentation
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
+ or restore multiple <productname>PostgreSQL</productname> databases from an
+ archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -166,6 +168,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +336,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..b162cf69412
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,286 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the output parameter 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, copy the server version back to the caller. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a0dcdbe0807
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+/* Raised so that a single pg_restore run can restore up to 100 databases. */
+#define MAX_ON_EXIT_NICELY 100
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..729ffc9e124 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, const char *dbname);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index b9d7ab98c3e..1dfa0420ea2 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -331,9 +331,16 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file instead of truncating
+ * it, since we are restoring a multi-database dump taken by pg_dumpall.
+ * If dbname is not NULL, the output restored to a file will carry comments
+ * noting which database is currently being restored.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, const char *dbname)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,9 +457,12 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
- ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
+ if (append_data && dbname != NULL)
+ ahprintf(AH, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ else
+ ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
if (AH->archiveRemoteVersion)
ahprintf(AH, "-- Dumped from database version %s\n",
@@ -792,8 +802,10 @@ RestoreArchive(Archive *AHX)
if (AH->public.verbose)
dumpTimestamp(AH, "Completed on", time(NULL));
- ahprintf(AH, "--\n-- PostgreSQL database dump complete\n--\n\n");
-
+ if (append_data && dbname != NULL)
+ ahprintf(AH, "--\n-- Database \"%s\" dump complete\n--\n\n", dbname);
+ else
+ ahprintf(AH, "--\n-- PostgreSQL database dump complete\n--\n\n");
/*
* Clean up & we're done.
*/
@@ -1263,7 +1275,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1291,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1670,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1691,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..45f0fb46e0f 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, NULL);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..47589cca90f 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -21,8 +22,6 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
-
static struct
{
on_exit_nicely_callback function;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 30dfda8c3ff..7efd59999d3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, NULL);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 64a60a26092..b75e4f56f31 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,25 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +146,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +188,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +239,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +267,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +418,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * A non-plain format requires a file name, which becomes the
+ * top-level dump directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires a non-empty -f/--file argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +479,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the top-level directory and open global.dat inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +514,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +523,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +545,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -608,7 +644,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -621,7 +657,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -632,12 +668,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or an archive.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1488,10 +1526,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1505,7 +1546,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1513,9 +1554,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For a non-plain (custom, directory, or tar) format, create a "databases"
+ * subdirectory under the main directory; each database is then dumped into
+ * it by a separate pg_dump invocation, just like a single-database dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1530,6 +1595,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, compute this database's dump path and
+ * record its OID and name in the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Emit one "dboid dbname" line per database. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1548,9 +1625,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1559,19 +1644,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1581,7 +1677,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1590,17 +1687,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the archive
+ * format down to the pg_dump command line.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1650,256 +1766,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1995,3 +1861,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create the named directory. If an empty directory with that name already
+ * exists, use it instead.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump data into this directory, either remove or empty "
+ "the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
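For illustration, the empty-directory test in create_or_open_dir can be exercised on its own. The following is a minimal standalone C sketch of the same readdir loop; the helper name `dir_is_empty` is made up here and the error handling is simplified compared with the patch:

```c
#include <assert.h>
#include <dirent.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

/*
 * Return true if "dirname" exists and contains no entries other than
 * "." and "..".  Standalone sketch of the check in create_or_open_dir.
 */
static bool
dir_is_empty(const char *dirname)
{
    DIR        *dir = opendir(dirname);
    struct dirent *d;
    bool        empty = true;

    if (dir == NULL)
        return false;           /* does not exist or not readable */

    while ((d = readdir(dir)) != NULL)
    {
        if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
        {
            empty = false;      /* found a real entry */
            break;
        }
    }
    closedir(dir);
    return empty;
}
```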
+
+/*
+ * parseDumpFormat
+ *
+ * Parse the format name given to --format and return the corresponding
+ * ArchiveFormat, or fail if the name is not recognized.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272d7db..d5431297a1e 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,71 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data,
+ const char *dbname);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static int filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +121,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +175,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +204,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +231,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the global objects from global.dat */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +353,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +466,106 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If no toc.dat file is present in the given path, check for global.dat.
+ * If global.dat is present, restore all the databases listed in map.dat
+ * (if it exists), skipping any that match the --exclude-database
+ * patterns.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * The -l/--list option is only supported for single-database dumps.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases from a pg_dumpall archive");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must
+ * be specified.  Even if the dump contains only a single database,
+ * report an error, because that database may not have been created
+ * yet on the target.
+ */
+ if (opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring multiple databases from a pg_dumpall archive");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the archive contains a single database, restore from that database's dump file instead.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from
+ * global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ process_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("skipping database restore because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns,
+ opts,
+ numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false, NULL);
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat file.
+ * dbname is the name of the database being restored.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, const char *dbname)
+{
+ Archive *AH;
+ int exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,11 +591,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, dbname);
}
/* done, print a summary of ignored errors */
@@ -451,7 +613,8 @@ main(int argc, char **argv)
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, multiple databases can be restored.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -469,6 +632,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +645,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -513,8 +678,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -619,3 +784,637 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc until a semicolon (the
+ * SQL statement terminator in the global.dat file) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
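The quote-aware scanning above can be illustrated with a small standalone helper that finds the first semicolon-terminated statement in an in-memory buffer. The function name `first_statement_len` and the buffer interface are made up for illustration; the patch itself reads from a FILE with fgetc:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Return the length of the first SQL statement in "src", up to and
 * including its terminating semicolon, skipping semicolons that appear
 * inside single- or double-quoted strings.  Returns 0 if no terminator
 * is found.  Like ReadOneStatement above, this does not handle doubled
 * quotes or dollar quoting.
 */
static size_t
first_statement_len(const char *src)
{
    char quote = 0;             /* current quote character, or 0 */

    for (size_t i = 0; src[i] != '\0'; i++)
    {
        char c = src[i];

        if (quote != 0)
        {
            if (c == quote)
                quote = 0;      /* closing quote */
        }
        else if (c == '\'' || c == '"')
            quote = c;          /* opening quote */
        else if (c == ';')
            return i + 1;       /* statement terminator found */
    }
    return 0;
}
```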
+
+/*
+ * filter_dbnames_for_restore
+ *
+ * Remove from dbname_oid_list all database names that match any
+ * pattern in db_exclude_patterns.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no db to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available; treating --exclude-database patterns as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX is the string-literal database name taken from dboid_list,
+ * which was originally read from the map.dat file in the backup
+ * directory; that is why quote_literal_cstr is needed.
+ *
+ * If we have no database connection, treat the patterns as plain
+ * names only.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the database from the list if it is excluded; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++; /* Increment db counter. */
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether it
+ * should be skipped for restore, but for now we simply build a list
+ * of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
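The map.dat parsing above (OID, a single space, then the database name) can be sketched as a standalone helper. The function name `parse_map_line` is made up for illustration; the real code also reports errors with pg_fatal instead of returning a status:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse one map.dat line of the form "<oid> <dbname>\n" into its parts.
 * Returns 1 on success, 0 on a malformed line.  Illustrative sketch of
 * the parsing in get_dbname_oid_list_from_mfile.
 */
static int
parse_map_line(const char *line, unsigned int *db_oid,
               char *dbname, size_t dbname_size)
{
    char        oid_str[32];
    const char *rest;
    size_t      len;

    /* The first whitespace-separated token must be a valid OID. */
    if (sscanf(line, "%31s", oid_str) != 1 ||
        sscanf(oid_str, "%u", db_oid) != 1)
        return 0;

    /* The database name follows a single separating space. */
    rest = line + strlen(oid_str);
    if (*rest != ' ')
        return 0;
    rest++;

    len = strcspn(rest, "\n");  /* strip the trailing newline */
    if (len == 0 || len >= dbname_size)
        return 0;
    memcpy(dbname, rest, len);
    dbname[len] = '\0';
    return 1;
}
```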
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * This will skip restoring for databases that are specified with
+ * exclude-database option.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore = 0;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /* If map.dat has no entry, return from here. */
+ if (dbname_oid_list.head == NULL)
+ return 0;
+
+ pg_log_info("found %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_log_info("no database connection available; --exclude-database patterns will be treated as literal names");
+ }
+ }
+
+ /*
+ * Process --exclude-database patterns (treated as plain names when
+ * there is no database connection).
+ */
+ num_db_restore = filter_dbnames_for_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the maximum number of archive handles that can register on-exit callbacks. */
+ if (num_db_restore > MAX_ON_EXIT_NICELY)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than %d databases in a single pg_restore invocation, but %d were requested",
+ MAX_ON_EXIT_NICELY,
+ num_db_restore);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no databases to restore out of %d total", num_total_db);
+ return 0;
+ }
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO: so far we have built the list of databases to restore,
+ * after removing the names excluded by --exclude-database.  We could
+ * now launch parallel workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * Reset override_dbname (set with -d/--dbname) so that the objects
+ * are restored into the database that was just created.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database and save exit_code. */
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers,
+ true, dboid_cell->db_name);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases: %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time, treating the semicolon as statement terminator.
+ * If outfile is given, copy all the SQL commands into it rather than
+ * executing them.
+ */
+static void
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_global_file_to_out_file(outfile, pfile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", sqlstatement.data);
+ break;
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * copy_global_file_to_out_file
+ *
+ * Copy the contents of global.dat into the output file.  If "-" is
+ * given as outfile, print the commands to stdout.
+ */
+static void
+copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Now append global.dat into out file. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Free all cells of the database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(unconstify(char *, cell->db_name));
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Free all cells of the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete one cell from the database name/OID list, given its predecessor.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when the last cell is removed. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
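For illustration, the effect of this quoting can be sketched with a compact standalone version. The helper name `quote_literal_demo` is made up here; the patch's code uses the SQL_STR_DOUBLE and ESCAPE_STRING_SYNTAX macros from the frontend headers, but the output is intended to match:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Wrap the input in single quotes, doubling embedded quotes and
 * backslashes, and prefix E'' escape-string syntax when a backslash
 * is present.
 */
static char *
quote_literal_demo(const char *rawstr)
{
    size_t len = strlen(rawstr);
    /* worst case: every char doubled, plus E, two quotes, and NUL */
    char *result = malloc(len * 2 + 4);
    char *dst = result;

    if (strchr(rawstr, '\\') != NULL)
        *dst++ = 'E';
    *dst++ = '\'';
    for (const char *s = rawstr; *s != '\0'; s++)
    {
        if (*s == '\'' || *s == '\\')
            *dst++ = *s;        /* double the special character */
        *dst++ = *s;
    }
    *dst++ = '\'';
    *dst = '\0';
    return result;
}
```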
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..de41ec06d86
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +231,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b6c170ac249..5851589b0b2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2674,6 +2674,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
Hi,
I think during restore we should not force the user to use -C in cases like
./pg_restore pdd -g -f -
./pg_restore pdd -a -f -
./pg_restore pdd -s -f -
because it's not good to require -C to create the database every time these
options are used individually.
The latest patch throws the following error for all the above cases:
pg_restore: error: -C/--create option should be specified when restoring
multiple databases by archive of pg_dumpall
pg_restore: hint: Try "pg_restore --help" for more information.
pg_restore: hint: If db is already created and dump has single db dump,
then use particular dump file.
Thanks and Regards
Srinath Reddy Sadipiralla
EDB: https://www.enterprisedb.com
Srinath Reddy Sadipiralla,
hi.
<refnamediv>
<refname>pg_restore</refname>
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
or restore multiple <productname>PostgreSQL</productname> database from an
archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
I think it's way too verbose. We can change it to:
<refpurpose>
restore <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application> or
<application>pg_dumpall</application>
</refpurpose>
<para>
<application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
created by <xref linkend="app-pgdump"/> in one of the non-plain-text
formats.
We can change it to:
<para>
<application>pg_restore</application> is a utility for restoring
<productname>PostgreSQL</productname> databases from an archive
created by <xref linkend="app-pgdump"/> or <xref
linkend="app-pgdumpall"/> in one of the non-plain-text
formats.
Similarly, the first 3 sentences of the pg_dumpall description section
need to change.
In the pg_restore.sgml <option>--create</option> section,
maybe we can explicitly mention that <option>--create</option> is
required when restoring multiple databases,
like: "This option is required when restoring multiple databases."
restoreAllDatabases
+ if (!conn)
+ pg_log_info("there is no database connection so consider pattern as
simple name for --exclude-database");
filter_dbnames_for_restore
+ if (!conn)
+ pg_log_info("considering PATTERN as NAME for --exclude-database
option as no db connection while doing pg_restore.");
these two log messages send out the same information.
maybe we can remove the first one, and change the second to
if (!conn && db_exclude_patterns.head != NULL)
pg_log_info("considering PATTERN as NAME for
--exclude-database option as no db connection while doing
pg_restore.");
as mentioned in the previous thread, there is no need to change PrintTOCSummary.
another minor issue about comments.
I guess we can tolerate this minor issue.
$BIN10/pg_restore --format=tar --create --file=1.sql
--exclude-database=src10 --verbose tar10 > dir_format 2>&1
The 1.sql file will copy tar10/global.dat as is, but we already excluded
src10, so 1.sql will still have comments like
--
-- Database "src10" dump
--
$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_dumpall --format=custom --file=x2.dump
Currently x1.dump/global.dat differs from x2.dump/global.dat
if we dump multiple databases using pg_dumpall we have
"
--
-- Databases
--
--
-- Database "template1" dump
--
--
-- Database "src10" dump
--
--
-- Database "x" dump
--
"
maybe these are not needed, since we already have the map.dat file
I am not sure if the following is as expected or not.
$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_restore --create --file=3.sql --globals-only x1.dump --verbose
$BIN10/pg_restore --create --file=3.sql x1.dump --verbose
the first pg_restore command will copy x1.dump/global.dat as is to 3.sql,
the second pg_restore will not copy anything to 3.sql.
but shouldn't the second command also copy the global dump to 3.sql?
On Tue, Feb 18, 2025 at 2:10 PM jian he <jian.universality@gmail.com> wrote:
hi.
hi. more cosmetic minor issues.
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list)
...
+ /*
+ * XXX : before adding dbname into list, we can verify that this db
+ * needs to skipped for restore or not but as of now, we are making
+ * a list of all the databases.
+ */
i think the above comment in get_dbname_oid_list_from_mfile is not necessary.
we already have comments in filter_dbnames_for_restore.
in get_dbname_oid_list_from_mfile:
```
pfile = fopen(map_file_path, PG_BINARY_R);
if (pfile == NULL)
pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
```
If the file does not exist, we use pg_fatal, so if the directory does not
exist, we should also use pg_fatal.
so
if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
{
pg_log_info("databases restoring is skipped as map.dat file is
not present in \"%s\"", dumpdirpath);
return 0;
}
can be
if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
pg_fatal("map.dat file: \"%s\"/map.dat does not exist", dumpdirpath);
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
i think the comments should be
+ /* Report error and exit if the file has any corrupted data. */
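For illustration, here is a minimal standalone sketch of the map.dat entry check being discussed. parse_map_line and its exact line format are assumptions for the sketch, not the patch's code; it only mirrors the idea that an invalid OID or empty name is a corrupted entry:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stand-in for the map.dat validation discussed above.
 * Each map.dat line is assumed to be "<dboid> <dbname>"; an OID of 0
 * or an empty name is treated as a corrupted entry, mirroring the
 * pg_fatal("invalid entry in map.dat ...") check.
 */
static int
parse_map_line(const char *line, unsigned int *db_oid, char *dbname,
               size_t namesize)
{
    (void) namesize;            /* assumed >= 64 for the %63s width below */
    *db_oid = 0;
    dbname[0] = '\0';

    if (sscanf(line, "%u %63s", db_oid, dbname) != 2)
        return -1;              /* malformed line */
    if (*db_oid == 0 || dbname[0] == '\0')
        return -1;              /* corrupted entry */
    return 0;
}
```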
+/*
+ * filter_dbnames_for_restore
+ *
+ * This will remove names from all dblist those can
+ * be constructed from database_exclude_pattern list.
+ *
+ * returns number of dbnames those will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
there is no "database_exclude_pattern" list, so the above comments are
slightly wrong.
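To make the no-connection behavior concrete, a minimal sketch of the fallback being described: without a database connection, pg_restore cannot evaluate psql-style patterns server-side, so each --exclude-database PATTERN is compared as a literal name. database_is_excluded_offline is a hypothetical helper for illustration only; with a connection the real code matches via SELECT 1 WHERE ... OPERATOR(pg_catalog.~) on the server.

```c
#include <assert.h>
#include <string.h>

/*
 * Illustrative sketch (not the patch's code): with no connection,
 * PATTERN is taken as a plain NAME, so wildcards do not match.
 */
static int
database_is_excluded_offline(const char *dbname,
                             const char *patterns[], int npatterns)
{
    for (int i = 0; i < npatterns; i++)
    {
        if (strcmp(dbname, patterns[i]) == 0)
            return 1;
    }
    return 0;
}
```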
+/*
+ * ReadOneStatement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.sql file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
here, "global sql" should change to "global.dat".
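For readers following along, a minimal sketch of the ReadOneStatement idea in the quoted comment: read from the passed file pointer with fgetc() until a semicolon (the statement terminator in global.dat) or EOF. read_one_statement is an assumption for illustration; the real routine must also cope with semicolons inside quoted strings, which this sketch deliberately ignores.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch only: accumulate characters until ';' or EOF.
 * Returns EOF when nothing more can be read, 0 otherwise. */
static int
read_one_statement(char *buf, size_t bufsize, FILE *pfile)
{
    size_t len = 0;
    int c = EOF;

    while (len < bufsize - 1 && (c = fgetc(pfile)) != EOF)
    {
        buf[len++] = (char) c;
        if (c == ';')
            break;
    }
    buf[len] = '\0';

    return (len == 0 && c == EOF) ? EOF : 0;
}
```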
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
does this mean the pg_dumpall --no-sync option only works for plain format?
if so, we need to update the pg_dumpall --no-sync section.
On Tue, 18 Feb 2025 at 10:00, Srinath Reddy <srinath2133@gmail.com> wrote:
Hi,
i think during restore we should not force user to use -C during cases like
./pg_restore pdd -g -f -
./pg_restore pdd -a -f -
./pg_restore pdd -s -f -
because it's not good to use -C to create the database every time when we are using these options individually.
The latest patch throws the following error for all the above cases.
-g => we can allow this case without the -C option.
-a and -s => the user should use these options with a single database (I
mean the user should restore from a particular dump file, not the full dump
directory of all the databases.)
As pg_dumpall dumps all the databases in create mode, we should either
use the --create option in our code or give an error. I think an error
is a good option if the user is using a dump of pg_dumpall.
If the user wants to use all the options, then the user should use a
single database dump path.
If we allow users without the --create option, then pg_restore will
create all the tables under a single database even if those tables are
in different databases.
I will fix the -g option (1st test case) in the next patch.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
hi.
Currently, pg_restore says
--exit-on-error
Exit if an error is encountered while sending SQL commands to the
database. The default is to continue and to display a count of errors
at the end of the restoration.
Do we need to apply this to restore executing global commands (create
role, create tablespace)?
If not, then we need to put some words in the pg_restore --exit-on-error
section saying that while restoring global objects the --exit-on-error
option is ignored.
IMHO, in pg_restore.sgml, we need words explicitly saying that
when restoring multiple databases, all the specified options will
apply to each individual database.
I tested the following options for restoring multiple databases. The
results look good to me.
--index=index
--table=table
--schema-only
--transaction-size
--no-comments
some part of (--filter=filename)
--exclude-schema=schema
Attached is a minor cosmetic change.
Attachments:
v17_pg_dumpall.minorchange (application/octet-stream)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index d5431297a1..b19e6f0181 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -880,10 +880,10 @@ ReadOneStatement(StringInfo inBuf, FILE *pfile)
/*
* filter_dbnames_for_restore
*
- * This will remove names from all dblist those can
- * be constructed from database_exclude_pattern list.
+ * This will remove entries from dbname_oid_list that match any pattern in the
+ * db_exclude_patterns list. dbname_oid_list may be modified in place.
*
- * returns number of dbnames those will be restored.
+ * returns the number of databases that will be restored.
*/
static int
filter_dbnames_for_restore(PGconn *conn,
@@ -922,7 +922,7 @@ filter_dbnames_for_restore(PGconn *conn,
* pg_catalog.default
*
* XXX represents the string literal database name derived from the
- * dboid_list variable, which is initially extracted from the
+ * dbname_oid_list, which is initially extracted from the
* map.dat file located in the backup directory. that's why we
* need quote_literal_cstr.
*
@@ -972,7 +972,7 @@ filter_dbnames_for_restore(PGconn *conn,
}
else
{
- count_db++; /* Increment db counter. */
+ count_db++;
dboidprecell = dboid_cell;
}
@@ -1160,7 +1160,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
/* Restore single database and save exit_code. */
dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers,
- true, dboid_cell->db_name);
+ true, dboid_cell->db_name);
/* Store exit_code to report it back. */
if (exit_code == 0 && dbexit_code != 0)
@@ -1282,7 +1282,7 @@ copy_global_file_to_out_file(const char *outfile, FILE *pfile)
*/
static void
simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname)
+ const char *dbname)
{
SimpleDatabaseOidListCell *cell;
@@ -1350,8 +1350,8 @@ simple_string_full_list_delete(SimpleStringList *list)
*/
static void
simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell,
- SimpleDatabaseOidListCell *prev)
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
{
if (prev == NULL)
{
On Tue, 18 Feb 2025 at 10:00, Srinath Reddy <srinath2133@gmail.com> wrote:
Hi,
i think during restore we should not force user to use -C during cases like
./pg_restore pdd -g -f -
./pg_restore pdd -a -f -
./pg_restore pdd -s -f -
because it's not good to use -C to create the database every time when we are using these options individually.
The latest patch throws the following error for all the above cases.
Fixed. (./pg_restore pdd -g -f -)
Thanks Jian and Srinath for the review and testing.
On Tue, 18 Feb 2025 at 11:41, jian he <jian.universality@gmail.com> wrote:
hi.
<refnamediv>
<refname>pg_restore</refname>
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
or restore multiple <productname>PostgreSQL</productname> database from an
archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
i think it's way too verbose. we can change it to:
<refpurpose>
restore <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application> or
<application>pg_dumpall</application>
</refpurpose>
Fixed.
<para>
<application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
created by <xref linkend="app-pgdump"/> in one of the non-plain-text
formats.
we can change it to
<para>
<application>pg_restore</application> is a utility for restoring
<productname>PostgreSQL</productname> databases from an archive
created by <xref linkend="app-pgdump"/> or <xref
linkend="app-pg-dumpall"/> in one of the non-plain-text
formats.
Fixed.
similarly, the first 3 sentences of the pg_dumpall description section
need to change.
I think we can keep them for pg_dumpall.
in pg_restore.sgml <option>--create</option> section,
maybe we can explicitly mention that when restoring multiple databases,
<option>--create</option> is required.
like: "This option is required when restoring multiple databases."
Fixed.
restoreAllDatabases
+ if (!conn)
+ pg_log_info("there is no database connection so consider pattern as
simple name for --exclude-database");
filter_dbnames_for_restore
+ if (!conn)
+ pg_log_info("considering PATTERN as NAME for --exclude-database
option as no db connection while doing pg_restore.");
these two log messages send out the same information.
maybe we can remove the first one, and change the second to
if (!conn && db_exclude_patterns.head != NULL)
pg_log_info("considering PATTERN as NAME for
--exclude-database option as no db connection while doing
pg_restore.");
Fixed.
as mentioned in the previous thread, there is no need to change PrintTOCSummary.
Yes, I removed it.
another minor issue about comments.
I guess we can tolerate this minor issue.
$BIN10/pg_restore --format=tar --create --file=1.sql
--exclude-database=src10 --verbose tar10 > dir_format 2>&1
The 1.sql file will copy tar10/global.dat as is, but we already excluded
src10, so 1.sql will still have comments like
--
-- Database "src10" dump
--
Fixed.
$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_dumpall --format=custom --file=x2.dump
Currently x1.dump/global.dat differs from x2.dump/global.dat
if we dump multiple databases using pg_dumpall we have
"
--
-- Databases
--
--
-- Database "template1" dump
--
--
-- Database "src10" dump
--
--
-- Database "x" dump
--
"
maybe these are not needed, since we already have the map.dat file
Okay. Fixed.
I am not sure if the following is as expected or not.
$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_restore --create --file=3.sql --globals-only x1.dump --verbose
$BIN10/pg_restore --create --file=3.sql x1.dump --verbose
the first pg_restore command will copy x1.dump/global.dat as is to 3.sql,
the second pg_restore will not copy anything to 3.sql.
but shouldn't the second command also copy the global dump to 3.sql?
We should copy global.dat. I fixed this in the v18 patch.
On Tue, 18 Feb 2025 at 14:02, jian he <jian.universality@gmail.com> wrote:
On Tue, Feb 18, 2025 at 2:10 PM jian he <jian.universality@gmail.com> wrote:
hi.
hi. more cosmetic minor issues.
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list)
...
+ /*
+ * XXX : before adding dbname into list, we can verify that this db
+ * needs to skipped for restore or not but as of now, we are making
+ * a list of all the databases.
+ */
i think the above comment in get_dbname_oid_list_from_mfile is not necessary.
we already have comments in filter_dbnames_for_restore.
As of now, I am keeping this comment as this will be helpful while
implementing parallel pg_restore.
in get_dbname_oid_list_from_mfile:
```
pfile = fopen(map_file_path, PG_BINARY_R);
if (pfile == NULL)
pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
```
If the file does not exist, we use pg_fatal, so if the directory does not
exist, we should also use pg_fatal.
so
if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
{
pg_log_info("databases restoring is skipped as map.dat file is
not present in \"%s\"", dumpdirpath);
return 0;
}
can be
if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
pg_fatal("map.dat file: \"%s\"/map.dat does not exist", dumpdirpath);
No, we can't add a FATAL here, as in the case of a globals-only dump we
will not have a map.dat file.
+ /* Report error if file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
i think the comments should be
+ /* Report error and exit if the file has any corrupted data. */
Fixed.
+/*
+ * filter_dbnames_for_restore
+ *
+ * This will remove names from all dblist those can
+ * be constructed from database_exclude_pattern list.
+ *
+ * returns number of dbnames those will be restored.
+ */
+static int
+filter_dbnames_for_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
there is no "database_exclude_pattern" list, so the above comments are
slightly wrong.
Fixed.
+/*
+ * ReadOneStatement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.sql file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
here, "global sql" should change to "global.dat".
Fixed.
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
does this mean the pg_dumpall --no-sync option only works for plain format?
if so, we need to update the pg_dumpall --no-sync section.
As of now, we are using this option with plain format, as we dump
server commands into a different file per db. We can test this more.
On Wed, 19 Feb 2025 at 17:08, jian he <jian.universality@gmail.com> wrote:
hi.
Currently, pg_restore says
--exit-on-error
Exit if an error is encountered while sending SQL commands to the
database. The default is to continue and to display a count of errors
at the end of the restoration.
Do we need to apply this to restore executing global commands (create
role, create tablespace)?
If not, then we need to put some words in the pg_restore --exit-on-error
section saying that while restoring global objects the --exit-on-error
option is ignored.
I think this is the same for all pg_restore commands. Still, if we want
to add some docs, we can.
IMHO, in pg_restore.sgml, we need words explicitly saying that
when restoring multiple databases, all the specified options will
apply to each individual database.
We can skip this extra info. I will see in the next version if we can
add something to the docs.
I tested the following options for restoring multiple databases. The
results look good to me.
--index=index
--table=table
--schema-only
--transaction-size
--no-comments
some part of (--filter=filename)
--exclude-schema=schema
Thank you for detailed testing.
Attached is a minor cosmetic change.
Okay.
Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v18_pg_dumpall-with-non-text_format-19th_feb.patch (application/octet-stream)
From 0b4864eedf20b0a323d88d5158bcbfa0d6ffda94 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Feb 2025 17:47:27 +0530
Subject: [PATCH] pg_dumpall with directory|tar|custom format and restore it
by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat    ::: dboid dbname --- entries for all databases in simple text form
databases  :::
    subdir dboid1 -> toc.dat and data files in archive format
    subdir dboid2 -> toc.dat and data files in archive format
    etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, then check
for global.dat to restore all databases. If a global.dat file exists in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists)
TODO1: We need to think about --exclude-database=PATTERN for pg_restore.
as of now, with a db connection:
SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
with no db connection, PATTERN=NAME matching only
TODO2: We need to make changes for exit_nicely as we add one entry for each database while
restoring. MAX_ON_EXIT_NICELY
TODO3: some more test cases for the newly added options.
TODO4: We can dump and restore databases in parallel mode.
This needs more study
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/Makefile | 8 +-
src/bin/pg_dump/common_dumpall_restore.c | 286 ++++++++
src/bin/pg_dump/common_dumpall_restore.h | 26 +
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 22 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 552 +++++++--------
src/bin/pg_dump/pg_restore.c | 827 ++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1533 insertions(+), 333 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2c0e3..6e1975f5ff0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. If you want to dump all the databases,
+ then pass a non-plain format so that the dump of each database can be
+ taken in a separate subdirectory in archive format.
+ By default, this is plain format.
+
+ If a non-plain mode is passed, then global.dat (global SQL commands) and
+ map.dat (dboid and dbname list of all the databases) files will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be files named by dboid for each
+ database, and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under the dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..835b3315713 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a dump of <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,11 +47,11 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..b162cf69412
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,286 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
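[Not part of the patch, just an illustration: constructConnStr() above relies on appendConnStrVal() for key='value' quoting. As I understand it, a conninfo value is single-quoted when it is empty or contains whitespace, a quote, or a backslash, with backslash-escaping of \ and ' inside; a rough Python sketch of that behavior:]

```python
def quote_conninfo_value(value: str) -> str:
    """Quote a libpq conninfo value roughly the way appendConnStrVal()
    does: wrap in single quotes when the value is empty or contains
    whitespace, a quote, or a backslash, escaping \\ and ' inside."""
    needs_quotes = value == "" or any(c.isspace() or c in "'\\" for c in value)
    if not needs_quotes:
        return value
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return f"'{escaped}'"


def construct_conn_str(pairs):
    """Build a key=value conninfo string, skipping dbname, password, and
    fallback_application_name, mirroring constructConnStr() above."""
    skip = {"dbname", "password", "fallback_application_name"}
    return " ".join(f"{k}={quote_conninfo_value(v)}"
                    for k, v in pairs if k not in skip)
```

[The skipped keys match the exclusions documented in constructConnStr()'s header comment.]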
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..a0dcdbe0807
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+/* TODO: raised so that a single pg_restore invocation can restore up to 100 databases. */
+#define MAX_ON_EXIT_NICELY 100
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
@@ -68,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index b9d7ab98c3e..b8b07562069 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -331,9 +331,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file instead of overwriting
+ * it, as we do when restoring a multi-database dump taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +455,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1268,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1284,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1663,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1684,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..47589cca90f 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -21,8 +22,6 @@
/* Globals exported by this file */
const char *progname = NULL;
-#define MAX_ON_EXIT_NICELY 20
-
static struct
{
on_exit_nicely_callback function;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 30dfda8c3ff..2b28cb39905 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 64a60a26092..b75e4f56f31 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -24,14 +25,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -64,28 +68,25 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -107,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -121,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -147,6 +146,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +188,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +239,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +267,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +418,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name must be provided so
+ * that we can create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -460,6 +479,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if one was given, otherwise use stdout. For a
+ * non-plain format, create the output directory and a global.dat file
+ * inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -468,7 +514,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +523,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -496,19 +545,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -608,7 +644,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -621,7 +657,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -632,12 +668,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into the specified dump format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1488,10 +1526,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1505,7 +1546,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1513,9 +1554,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For a non-plain (custom/directory/tar) format, create a "databases"
+ * subdirectory under the main directory; each database's dump file (or
+ * subdirectory, for directory format) is then created there, just as
+ * pg_dump would do for a single database.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1530,6 +1595,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, record the database OID and name in the
+ * map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1548,9 +1625,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* We are dumping all databases, so add the --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1559,19 +1644,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1581,7 +1677,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1590,17 +1687,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the archive
+ * format to the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1650,256 +1766,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
@@ -1995,3 +1861,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name. If an empty directory with
+ * that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into the directory \"%s\", "
+ "either remove or empty it, or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
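[Again not part of the patch, but to make the on-disk format concrete: dumpDatabases() above writes map.dat with one "<oid> <dbname>" line per database. A reader along the lines of what get_dbname_oid_list_from_mfile() in pg_restore will need could be sketched as follows; note that database names containing spaces are not handled by this naive split, so the real parser must be more careful:]

```python
def read_map_file(path):
    """Parse a pg_dumpall map.dat file: one "<oid> <dbname>" entry per
    line, returning (oid, dbname) tuples. Assumes dbnames contain no
    spaces, which the real implementation cannot assume."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            # Split at the first space: OID first, name is the remainder.
            oid, _, dbname = line.partition(" ")
            entries.append((int(oid), dbname))
    return entries
```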
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272d7db..8974ee886d5 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,71 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data,
+ const char *dbname);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static void process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +121,14 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +175,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +204,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +231,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +353,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +466,106 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If no toc.dat file is present in the given path, check for
+ * global.dat. If global.dat is present, restore all databases
+ * listed in map.dat (if it exists), skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat"))
+ {
+ /* If global.dat exists, process it. */
+ if (IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+ int exit_code = 0;
+
+ /*
+ * The -l/--list option is supported only for single-database dumps.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases from an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option
+ * must be specified.
+ * Report the error even if the dump contains only a single
+ * database, since that database may not have been created yet
+ * either.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring multiple databases from an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the dump contains only a single database, restore from that database's dump file directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from
+ * global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ process_global_sql_commands(conn, inputFileSpec, opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("skipping database restore because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ exit_code = restoreAllDatabases(conn, inputFileSpec,
+ db_exclude_patterns,
+ opts,
+ numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+
+ return exit_code;
+ }
+ }
+
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring multiple databases from an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring from an archive created by pg_dumpall");
+
+ return restoreOneDatabase(inputFileSpec, opts, numWorkers, false, NULL);
+}
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using its toc.dat file.
+ * dbname is the name of the database currently being restored.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, const char *dbname)
+{
+ Archive *AH;
+ int exit_code;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,11 +591,11 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
/* done, print a summary of ignored errors */
@@ -451,7 +613,8 @@ main(int argc, char **argv)
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, it can also restore multiple databases.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -469,6 +632,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +645,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -513,8 +678,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -619,3 +784,645 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc until a semicolon (the
+ * SQL statement terminator in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will remove from dbname_oid_list any entries that match a
+ * pattern in the db_exclude_patterns list; dbname_oid_list may be
+ * modified in place.
+ * Returns the number of databases that will be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available; treating --exclude-database patterns as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX is the string-literal database name taken from
+ * dbname_oid_list, which is initially extracted from the map.dat
+ * file located in the backup directory; that is why we need
+ * quote_literal_cstr.
+ *
+ * Without a database connection, treat PATTERN as a literal NAME.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the database from the list if it is to be skipped; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names and
+ * their corresponding OIDs.
+ *
+ * Returns the total number of database names in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return here, as there
+ * is no database to restore.
+ */
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore: map.dat file is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract the database OID. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy the database name. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be
+ * skipped before adding it to the list, but for now we list all
+ * the databases and filter them later.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * This will restore all databases whose dumps are present in the
+ * directory, based on the map.dat mapping.
+ *
+ * Databases whose names match an --exclude-database pattern are
+ * skipped.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int exit_code = 0;
+ int num_db_restore = 0;
+ int num_total_db;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing
+ * global.dat file.
+ */
+ if (dbname_oid_list.head == NULL)
+ {
+ process_global_sql_commands(conn, dumpdirpath, opts->filename);
+ return 0;
+ }
+
+ pg_log_info("found %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\"; trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database patterns; they are treated as literal names if there is no connection.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* TODO: MAX_ON_EXIT_NICELY is currently 100, the maximum number of AH handles that can be registered on exit. */
+ if (num_db_restore > MAX_ON_EXIT_NICELY)
+ {
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+ pg_fatal("cannot restore more than %d databases in a single pg_restore run, but %d were requested",
+ MAX_ON_EXIT_NICELY,
+ num_db_restore);
+ }
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return 0;
+ }
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * XXX: TODO: at this point we have the list of databases to restore,
+ * filtered by --exclude-database. We can now launch parallel
+ * workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int dbexit_code;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database and save exit_code. */
+ dbexit_code = restoreOneDatabase(subdirpath, opts, numWorkers,
+ true, dboid_cell->db_name);
+
+ /* Store exit_code to report it back. */
+ if (exit_code == 0 && dbexit_code != 0)
+ exit_code = dbexit_code;
+
+ dboid_cell = dboid_cell->next;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return exit_code;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * This opens the global.dat file and executes the global SQL
+ * commands one statement at a time; a semicolon is treated as the
+ * statement terminator. If outfile is passed, all SQL commands are
+ * copied into outfile rather than executed.
+ */
+static void
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_global_file_to_out_file(outfile, pfile);
+ return;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ fclose(pfile);
+}
+
+/*
+ * copy_global_file_to_out_file
+ *
+ * Copy the global.dat contents into the output file. If "-" is used
+ * as outfile, write the commands to stdout.
+ */
+static void
+copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Now append global.dat into out file. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Appends a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete the given cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..de41ec06d86
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +231,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 80aa50d55a4..02a01251b04 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2676,6 +2676,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
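As an aside on the map.dat handling in the patch above: each line is expected to be `<oid> <dbname>`, parsed with sscanf and a trailing-newline strip. A minimal standalone sketch of that parsing, using a hypothetical helper name that is not part of the patch:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse one map.dat-style line of the form "<oid> <dbname>\n".
 * Returns false if the line does not yield a valid (nonzero) OID and a
 * nonempty database name.
 */
static bool
parse_map_line(const char *line, unsigned int *db_oid,
			   char *dbname, size_t dbname_size)
{
	char		oid_str[32];

	/* Read the OID both as text (to know its width) and as a number. */
	if (sscanf(line, "%31s", oid_str) != 1 ||
		sscanf(line, "%u", db_oid) != 1 || *db_oid == 0)
		return false;

	/* The database name is everything after the OID and one space. */
	snprintf(dbname, dbname_size, "%s", line + strlen(oid_str) + 1);

	/* Strip the trailing newline, if any. */
	dbname[strcspn(dbname, "\n")] = '\0';

	return dbname[0] != '\0';
}
```

Note that, as in the patch, a database name containing a newline would not survive this round trip; the map.dat format implicitly assumes one name per line.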
Hello,
I think the business with an evergrowing on_exit list needs a different
solution than a gigantic array of entries. Maybe it would make sense to
restructure that code so that there's a single on_exit item, but there
exists a list of per-database entries to clean up which are all done in
one call of the function. Then you don't need to change the hardcoded
MAX_ON_EXIT_NICELY array size there.
I think it would be better to have a preparatory 0001 patch that just
moves the code to the new files, without touching anything else, and
then the new feature is introduced as a separate 0002 commit.
You still have a bunch of XXX items here and there which look to me like
they need to be handled before this patch can be considered final, plus
the TODOs in the commit message. Please pgindent.
Thanks
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"Porque francamente, si para saber manejarse a uno mismo hubiera que
rendir examen... ¿Quién es el machito que tendría carnet?" (Mafalda)
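For reviewers of ReadOneStatement in the patch above: the core idea is splitting on semicolons while treating single- and double-quoted regions as opaque. A compact standalone sketch of that idea (not the patch's code, which also handles newlines and builds a StringInfo):

```c
#include <stddef.h>

/*
 * Find the end of the first SQL statement in "src": the first ';'
 * that is not inside a single- or double-quoted region.  Returns the
 * index one past the ';', or the string length if none is found.
 */
static size_t
first_statement_end(const char *src)
{
	char		quote = '\0';	/* '\0' means not inside quotes */
	size_t		i;

	for (i = 0; src[i] != '\0'; i++)
	{
		if (quote != '\0')
		{
			if (src[i] == quote)
				quote = '\0';	/* leaving the quoted region */
		}
		else if (src[i] == '\'' || src[i] == '"')
			quote = src[i];		/* entering a quoted region */
		else if (src[i] == ';')
			return i + 1;
	}
	return i;
}
```

This sketch, like the patch, does not handle escaped quotes or dollar quoting; global.dat statements emitted by pg_dumpall are assumed not to need them.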
Thanks Álvaro for feedback.
On Thu, 20 Feb 2025 at 02:39, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hello,
I think the business with an evergrowing on_exit list needs a different
solution than a gigantic array of entries. Maybe it would make sense to
restructure that code so that there's a single on_exit item, but there
exists a list of per-database entries to clean up which are all done in
one call of the function. Then you don't need to change the hardcoded
MAX_ON_EXIT_NICELY array size there.
In the latest patch, I added a new function that resets the
on_exit_nicely index (on_exit_nicely_index) after each database restore.
I think it would be better to have a preparatory 0001 patch that just
moves the code to the new files, without touching anything else, and
then the new feature is introduced as a separate 0002 commit.
Fixed.
You still have a bunch of XXX items here and there which look to me like
they need to be handled before this patch can be considered final, plus
Fixed.
the TODOs in the commit message. Please pgindent.
I am facing some errors in pgindent. I will run pgindent in the next version.
Here, I am attaching updated patches for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v19_0001_move-common-code-of-pg_dumpall-and-pg_restore-to-new_file.patch (application/octet-stream)
From 7a7c2173ea035c0ca073533973545f061df4b492 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 20 Feb 2025 09:10:23 +0530
Subject: [PATCH 1/2] move common code of pg_dumpall and pg_restore to new file
connectDatabase is used by both pg_dumpall and pg_restore, so
move the common code to a new file.
---
src/bin/pg_dump/Makefile | 4 +-
src/bin/pg_dump/common_dumpall_restore.c | 286 +++++++++++++++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 24 ++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_dumpall.c | 268 +--------------------
5 files changed, 321 insertions(+), 262 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..86006d111c3 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -50,8 +50,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..b162cf69412
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,286 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * This is a common file for pg_dumpall and pg_restore.
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the global variable 'connstr' is set to a connection string
+ * containing the options used.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. Remember the options used, in the form of a
+ * connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If needed, then copy server version to outer function. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..7fe1c00ab71
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..97dbfaeb67f 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 64a60a26092..e279903f469 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -20,6 +20,7 @@
#include "catalog/pg_authid_d.h"
#include "common/connect.h"
+#include "common_dumpall_restore.h"
#include "common/file_utils.h"
#include "common/hashfn_unstable.h"
#include "common/logging.h"
@@ -71,12 +72,6 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
@@ -85,7 +80,7 @@ static void read_dumpall_filters(const char *filename, SimpleStringList *pattern
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -468,7 +463,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -476,10 +472,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -1650,256 +1648,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
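As a side note on the constructConnStr code above: the key='value' quoting it relies on (libpq's appendConnStrVal) can be approximated in Python. This is an illustrative sketch of the escaping rules with hypothetical helper names, not the actual C code:

```python
def append_connstr_val(value: str) -> str:
    """Quote a conninfo value roughly the way libpq expects: values that
    are empty or contain spaces, single quotes, or backslashes are wrapped
    in single quotes, with backslash-escapes for ' and \\ inside."""
    if value and not any(c in value for c in " '\\"):
        return value
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return f"'{escaped}'"

def construct_connstr(pairs: dict[str, str]) -> str:
    """Build a connection string, skipping dbname, password and
    fallback_application_name, as constructConnStr does."""
    skip = {"dbname", "password", "fallback_application_name"}
    return " ".join(f"{k}={append_connstr_val(v)}"
                    for k, v in pairs.items() if k not in skip)

print(construct_connstr({"host": "db host", "port": "5432",
                         "password": "secret"}))
# host='db host' port=5432
```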
Attachment: v19_0002_pg_dumpall-with-non-text_format-20th_feb.patch (application/octet-stream)
From 046918e5701034ce8d591b2d681c59d15735964f Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 20 Feb 2025 11:56:25 +0530
Subject: [PATCH 2/2] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
The dumps are laid out as:
global.dat ::: global SQL commands in plain-text format
map.dat ::: "dboid dbname" entries for all databases, in plain text
databases/ :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
If needed, a single database can be restored from its particular subdir.
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres database
-- to find a dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
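The map.dat lookup described in the NOTE above can be sketched as a small helper. This is hypothetical Python assuming the "dboid dbname" per-line format from the description, not code from the patch:

```python
from pathlib import PurePosixPath

def db_subdir(map_dat: str, dbname: str) -> PurePosixPath:
    """Given map.dat contents ("dboid dbname" per line), return the
    databases/<dboid> subdirectory to hand to pg_restore for a
    single-database restore."""
    for line in map_dat.splitlines():
        if not line.strip():
            continue
        dboid, name = line.split(maxsplit=1)  # dbname may contain spaces
        if name == dbname:
            return PurePosixPath("databases") / dboid
    raise LookupError(f"database {dbname!r} not found in map.dat")

print(db_subdir("1 template1\n5 postgres\n", "postgres"))  # databases/5
```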
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude databases whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory,
check for global.dat in order to restore all databases. If global.dat exists in
the directory, first restore all globals from global.dat, then restore the
databases one by one from the map.dat list (if it exists).
For --exclude-database=PATTERN in pg_restore, the match is currently done with
SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default;
if there is no database connection, only exact PATTERN=NAME matching is done.
For each database, the exit_nicely array index is reset.
At the end of the restore, a warning gives the total number of errors (covering
global.dat and every database), and for each database a warning gives the dbname
and its error count.
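The pattern handling sketched above (with a fallback to exact name matching when there is no connection) can be approximated like this; pattern_to_regex and excluded are hypothetical illustrations that ignore the double-quoting and case-folding done by the real processSQLNamePattern:

```python
import re

def pattern_to_regex(pattern: str) -> str:
    """Translate a psql \\d-style pattern to an anchored regex:
    * becomes .*, ? becomes ., everything else is escaped literally."""
    out = []
    for ch in pattern:
        if ch == "*":
            out.append(".*")
        elif ch == "?":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return "^(" + "".join(out) + ")$"

def excluded(dbname: str, patterns: list[str]) -> bool:
    """Client-side equivalent of the --exclude-database check."""
    return any(re.match(pattern_to_regex(p), dbname) for p in patterns)

print(pattern_to_regex("test*"))      # ^(test.*)$
print(excluded("testdb", ["test*"]))  # True
```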
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/Makefile | 4 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 4 +-
src/bin/pg_dump/pg_backup_archiver.c | 22 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 18 +
src/bin/pg_dump/pg_backup_utils.h | 1 +
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 285 +++++++--
src/bin/pg_dump/pg_restore.c | 846 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1243 insertions(+), 74 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 39d93c2c0e3..6e1975f5ff0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+      Specify the format of the dump files. To dump all databases in archive
+      format, with each database in its own subdirectory, choose a non-plain
+      format.
+      By default, the format is plain.
+
+      If a non-plain format is used, global.dat (global SQL commands) and
+      map.dat (a list of the dboid and dbname of every database) are created.
+      In addition, a subdirectory named databases is created.
+      Under this databases subdirectory there is an entry named after each
+      database's dboid; if <option>--format</option> is directory, toc.dat and
+      the other dump files are placed under that dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b8b27e1719e..835b3315713 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+   restore <productname>PostgreSQL</productname> databases from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+   <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+     from a dump made by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 86006d111c3..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,8 +47,8 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 97dbfaeb67f..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -69,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index f0f19bb0b29..65000e5a083 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -306,7 +306,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX);
+extern void PrintTOCSummary(Archive *AHX, bool append_data);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index b9d7ab98c3e..b8b07562069 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -82,7 +82,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -331,9 +331,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append data to the output file, since we are
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -450,7 +455,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1263,7 +1268,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX)
+PrintTOCSummary(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1279,7 +1284,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, append_data);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1658,7 +1663,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1678,7 +1684,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..0ac4e6c32da 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -13,6 +13,7 @@
*/
#include "postgres_fe.h"
+#include "common_dumpall_restore.h"
#ifdef WIN32
#include "parallel.h"
#endif
@@ -104,3 +105,20 @@ exit_nicely(int code)
exit(code);
}
+
+/*
+ * reset_exit_nicely_array
+ *
+ * Run all registered exit callbacks and reset the on_exit_nicely array index.
+ */
+void
+reset_exit_nicely_array(int code)
+{
+ int i;
+
+ for (i = on_exit_nicely_index - 1; i >= 0; i--)
+ on_exit_nicely_list[i].function(code,
+ on_exit_nicely_list[i].arg);
+
+ on_exit_nicely_index = 0;
+}
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index 38551944513..3be5c5c31a2 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -30,6 +30,7 @@ extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
extern void exit_nicely(int code) pg_attribute_noreturn();
+extern void reset_exit_nicely_array(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
#undef pg_fatal
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 30dfda8c3ff..2b28cb39905 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1148,7 +1148,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e279903f469..11c034bc6bc 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -25,14 +26,17 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -65,9 +69,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +81,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -102,7 +109,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
@@ -116,8 +123,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -142,6 +147,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -183,6 +189,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -232,7 +240,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -260,7 +268,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -409,6 +419,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, the user must also supply a
+ * file name, which becomes the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty string");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -455,6 +480,33 @@ main(int argc, char *argv[])
if (on_conflict_do_nothing)
appendPQExpBufferStr(pgdumpopts, " --on-conflict-do-nothing");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -494,19 +546,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -606,7 +645,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -619,7 +658,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -630,12 +669,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1486,10 +1527,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1503,7 +1547,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1511,9 +1555,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; each database's dump is then written under
+ * it, just as a standalone pg_dump of that database would produce.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1528,6 +1596,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, record the database OID and name in
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1546,9 +1626,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1557,19 +1645,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1579,7 +1678,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1588,17 +1688,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain dump format, pass the output file name and the
+ * format option to pg_dump so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1743,3 +1862,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name.  If an empty directory
+ * with that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into this directory, either remove or "
+ "empty the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse the -F/--format option value and return the corresponding format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c602272d7db..8a78a52fb09 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is a utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,27 +41,71 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data,
+ const char *dbname);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
static int disable_triggers = 0;
static int enable_row_security = 0;
@@ -77,11 +121,15 @@ main(int argc, char **argv)
static int strict_names = 0;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -128,6 +176,7 @@ main(int argc, char **argv)
{"no-security-labels", no_argument, &no_security_labels, 1},
{"no-subscriptions", no_argument, &no_subscriptions, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -156,7 +205,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "aAcCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -183,11 +232,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -302,6 +354,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -329,6 +385,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -404,6 +467,107 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, check for global.dat.
+ * If global.dat exists, restore all the databases listed in map.dat
+ * (if it exists), skipping any that match an --exclude-database
+ * pattern.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * User is suggested to use single database dump for --list option.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases from a pg_dumpall archive");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified.  Even if the dump contains a single database, report an
+ * error, since that database might not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring multiple databases from a pg_dumpall archive");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the archive contains a single database, restore from that database's dump file directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to the database to execute the global SQL commands from global.dat.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist, so this is a single-database archive */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring multiple databases from a pg_dumpall archive");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring from a pg_dumpall archive");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false, NULL);
+ }
+
+ /* done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat file.  dbname is the name
+ * of the database being restored.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, const char *dbname)
+{
+ Archive *AH;
+ int n_errors = 0;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -429,29 +593,28 @@ main(int argc, char **argv)
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH);
+ PrintTOCSummary(AH, append_data);
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
+ /* return number of errors */
if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -469,6 +632,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -481,6 +645,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -513,8 +678,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -619,3 +784,654 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc() until a semicolon (the
+ * SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+
+ /* Free the scratch buffer used for quoted text. */
+ pfree(q.data);
+
+ /* No input before EOF signal means time to quit. */
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries whose names match a pattern
+ * in the db_exclude_patterns list; the list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection; --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ *   SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ *   pg_catalog.default
+ *
+ * XXX is the literal database name taken from dbname_oid_list, which
+ * was read from the map.dat file in the backup directory; that is why
+ * quote_literal_cstr is needed.
+ *
+ * If there is no database connection, PATTERN is treated as a literal
+ * NAME.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" is matching with exclude pattern: \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Skip the database if it matched an exclude pattern; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file and read it line by line, building a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return early as there
+ * is no database to restore.
+ */
+ if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
+ {
+ pg_log_info("skipping database restore as map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether
+ * this database should be skipped during restore, but for now we
+ * list all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * This will skip restoring for databases that are specified with
+ * exclude-database option.
+ *
+ * returns, number of errors while doing restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing
+ * global.dat file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN/NAME. Without a
+ * connection, each PATTERN is treated as a literal NAME.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d", num_db_restore, num_total_db);
+
+ /*
+ * We now have the list of databases to restore after skipping those
+ * matched by --exclude-database. Launch parallel workers to restore
+ * these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that
+ * objects are restored into the already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers,
+ true, dboid_cell->db_name);
+
+ /* print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on \"%s\" database restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+
+ /*
+ * Reset the exit_nicely callback array for each database so that the
+ * archive machinery can restore multiple databases in one run.
+ */
+ if (dboid_cell != NULL)
+ reset_exit_nicely_array(n_errors ? 1 : 0);
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute all global sql commands one
+ * statement at a time.
+ * Semicolon is considered as statement terminator. If outfile is passed, then
+ * this will copy all sql commands into outfile rather then executing them.
+ *
+ * returns, number of errors while processing global.dat
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Now open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_global_file_to_out_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors during global.dat processing. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_global_file_to_out_file
+ *
+ * Copy the global.dat file into the output file. If "-" is used as
+ * outfile, print the commands to stdout.
+ */
+static void
+copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Now append global.dat into out file. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete a cell from the database name/OID list, given its predecessor.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Update the tail pointer if the removed cell was the last one. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 214240f1ae5..de41ec06d86
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -219,6 +219,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -226,4 +231,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fb39c915d76..e0f84e15352 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2677,6 +2677,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
hi.
about 0001
/*
* connectDatabase
*
* Make a database connection with the given parameters. An
* interactive password prompt is automatically issued if required.
*
* If fail_on_error is false, we return NULL without printing any message
* on failure, but preserve any prompted password for the next try.
*
* On success, the global variable 'connstr' is set to a connection string
* containing the options used.
*/
PGconn *
connectDatabase(const char *dbname, const char *connection_string,
const char *pghost, const char *pgport, const char *pguser,
trivalue prompt_password, bool fail_on_error, const
char *progname,
const char **connstr, int *server_version)
do the comments need to change? since no
global variable 'connstr' in common_dumpall_restore.c
maybe we need some words to explain server_version, (i don't have a
huge opinion though).
/*-------------------------------------------------------------------------
*
* common_dumpall_restore.c
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* This is a common file for pg_dumpall and pg_restore.
* src/bin/pg_dump/common_dumpall_restore.c
*
*-------------------------------------------------------------------------
*/
may change to
/*-------------------------------------------------------------------------
*
* common_dumpall_restore.c
* This is a common file for pg_dumpall and pg_restore.
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/bin/pg_dump/common_dumpall_restore.c
*
*-------------------------------------------------------------------------
*/
so the style aligns with most other files.
(we can apply the same logic to src/bin/pg_dump/common_dumpall_restore.h)
in src/bin/pg_dump/pg_dumpall.c
#include "common_dumpall_restore.h"
imply include "pg_backup.h".
so in src/bin/pg_dump/pg_dumpall.c, we don't need include "pg_backup.h"
attached are minor cosmetic changes for v19.
Attachment: v19_minor_change.no-cfbot
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 65000e5a083..273c92044a4 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -319,7 +319,7 @@ extern Archive *CreateArchive(const char *FileSpec, const ArchiveFormat fmt,
DataDirSyncMethod sync_method);
/* The --list option */
-extern void PrintTOCSummary(Archive *AHX, bool append_data);
+extern void PrintTOCSummary(Archive *AHX);
extern RestoreOptions *NewRestoreOptions(void);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index b8b07562069..21595e6dd6f 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -1268,7 +1268,7 @@ ArchiveEntry(Archive *AHX, CatalogId catalogId, DumpId dumpId,
/* Public */
void
-PrintTOCSummary(Archive *AHX, bool append_data)
+PrintTOCSummary(Archive *AHX)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -1284,7 +1284,7 @@ PrintTOCSummary(Archive *AHX, bool append_data)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec, append_data);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 8a78a52fb09..5295129313b 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -122,7 +122,7 @@ main(int argc, char **argv)
bool data_only = false;
bool schema_only = false;
int n_errors = 0;
- bool globals_only = false;
+ bool globals_only = false;
SimpleStringList db_exclude_patterns = {NULL, NULL};
struct option cmdopts[] = {
@@ -559,14 +559,13 @@ main(int argc, char **argv)
* This will restore one database using toc.dat file.
* dbname is the current to be restored database name.
*
- * returns, number of errors while doing restore.
+ * returns the number of errors while doing restore.
*/
static int
restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, const char *dbname)
{
Archive *AH;
- int n_errors = 0;
AH = OpenArchive(inputFileSpec, opts->format);
@@ -593,21 +592,17 @@ restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
AH->numWorkers = numWorkers;
if (opts->tocSummary)
- PrintTOCSummary(AH, append_data);
+ PrintTOCSummary(AH);
else
{
ProcessArchiveRestoreOptions(AH);
RestoreArchive(AH, append_data);
}
- /* return number of errors */
- if (AH->n_errors)
- n_errors = AH->n_errors;
-
/* AH may be freed in CloseArchive? */
CloseArchive(AH);
- return n_errors;
+ return AH->n_errors;
}
static void
@@ -1163,7 +1158,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (n_errors)
{
n_errors_total += n_errors;
- pg_log_warning("errors ignored on \"%s\" database restore: %d", dboid_cell->db_name, n_errors);
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
}
dboid_cell = dboid_cell->next;
@@ -1193,7 +1188,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* Semicolon is considered as statement terminator. If outfile is passed, then
* this will copy all sql commands into outfile rather then executing them.
*
- * returns, number of errors while processing global.dat
+ * returns the number of errors while processing global.dat
*/
static int
process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
On Thu, 20 Feb 2025 at 14:48, jian he <jian.universality@gmail.com> wrote:
hi.
about 0001/*
* connectDatabase
*
* Make a database connection with the given parameters. An
* interactive password prompt is automatically issued if required.
*
* If fail_on_error is false, we return NULL without printing any message
* on failure, but preserve any prompted password for the next try.
*
* On success, the global variable 'connstr' is set to a connection string
* containing the options used.
*/
PGconn *
connectDatabase(const char *dbname, const char *connection_string,
const char *pghost, const char *pgport, const char
*pguser,
trivalue prompt_password, bool fail_on_error, const
char *progname,
const char **connstr, int *server_version)
do the comments need to change? since no
global variable 'connstr' in common_dumpall_restore.c
maybe we need some words to explain server_version, (i don't have a
huge opinion though).
Fixed.
/*-------------------------------------------------------------------------
*
* common_dumpall_restore.c
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* This is a common file for pg_dumpall and pg_restore.
* src/bin/pg_dump/common_dumpall_restore.c
*
*-------------------------------------------------------------------------
*/
may change to
/*-------------------------------------------------------------------------
*
* common_dumpall_restore.c
* This is a common file for pg_dumpall and pg_restore.
*
* Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* IDENTIFICATION
* src/bin/pg_dump/common_dumpall_restore.c
*
*-------------------------------------------------------------------------
*/
so the style aligns with most other files.
Fixed.
(we can apply the same logic to src/bin/pg_dump/common_dumpall_restore.h)
We are already doing the same in the .h file.
in src/bin/pg_dump/pg_dumpall.c
#include "common_dumpall_restore.h"
imply include "pg_backup.h".
so in src/bin/pg_dump/pg_dumpall.c, we don't need include "pg_backup.h"
Fixed. Also I removed some extra .h files from the patch.
attached are minor cosmetic changes for v19.
- /* return number of errors */
- if (AH->n_errors)
- n_errors = AH->n_errors;
-
/* AH may be freed in CloseArchive? */
CloseArchive(AH);
As per this comment, we can't return AH->n_errors as this might already be
freed so we should copy before CloseArchive.
Here, I am attaching updated patches for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachment: v20_0001_move-common-code-of-pg_dumpall-and-pg_restore-to-new_file.patch
From a45313ac9e343d7edd6545354f595afc71bf29c4 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 20 Feb 2025 16:27:17 +0530
Subject: [PATCH 1/2] move common code of pg_dumpall and pg_restore to new file
connectDatabase is used by both pg_dumpall and pg_restore so
move common code to new file.
---
src/bin/pg_dump/Makefile | 4 +-
src/bin/pg_dump/common_dumpall_restore.c | 289 +++++++++++++++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 24 ++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_dumpall.c | 268 +--------------------
5 files changed, 324 insertions(+), 262 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..86006d111c3 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -50,8 +50,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..92f52b7239a
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,289 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ * This is a common file for pg_dumpall and pg_restore.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, if requested, '*connstr' is set to a connection string
+ * containing the options used and '*server_version' is set to the
+ * server version, for the caller's use.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in the
+ * form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, copy the server version to the output variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..7fe1c00ab71
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..97dbfaeb67f 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index b993b05cc22..a6dafb92377 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -20,6 +20,7 @@
#include "catalog/pg_authid_d.h"
#include "common/connect.h"
+#include "common_dumpall_restore.h"
#include "common/file_utils.h"
#include "common/hashfn_unstable.h"
#include "common/logging.h"
@@ -71,12 +72,6 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
@@ -85,7 +80,7 @@ static void read_dumpall_filters(const char *filename, SimpleStringList *pattern
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -484,7 +479,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -492,10 +488,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -1670,256 +1668,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
Attachment: v20_0002_pg_dumpall-with-non-text_format-20th_feb.patch (application/octet-stream)
From 6d3e8052cba5a80514fcce8b2f479cf1929e86fc Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 20 Feb 2025 19:09:55 +0530
Subject: [PATCH 2/2] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global SQL commands in simple plain format
map.dat ::: dboid dbname --- one entry per database in simple text form
databases/ :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
if needed, restore a single db from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid for a dbname, refer to map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude databases whose names match the pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat to restore all databases. If a global.dat file exists in the directory,
first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists)
for --exclude-database=PATTERN in pg_restore:
as of now, the pattern is matched with SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if there is no db connection, only exact PATTERN=NAME matching is done
for each database, the on_exit_nicely list is cleaned (its index is reset).
at the end of the restore, a warning reports the total number of errors (including errors
from global.dat and from each database), and for each database a warning is printed with
the dbname and its total errors.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/Makefile | 4 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 23 +-
src/bin/pg_dump/pg_backup_utils.h | 1 +
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 286 +++++++--
src/bin/pg_dump/pg_restore.c | 841 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1236 insertions(+), 78 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index f0823765c4e..39e7f8ddb89 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster in the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump. To dump all databases with each
+ database's dump written to its own subdirectory in archive format,
+ pass a non-plain format.
+ By default, the format is plain.
+
+ If a non-plain mode is passed, a global.dat file (global SQL commands) and a
+ map.dat file (the dboid and dbname of every dumped database) will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be an entry named after its dboid for each
+ database, and if <option>--format</option> is directory, toc.dat and the other
+ dump files will be placed under that dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under each dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b4031708430..b1b6b2bba8d 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a dump made by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 86006d111c3..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,8 +47,8 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 97dbfaeb67f..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -69,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 350cf659c41..f36a9ae7cd9 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -308,7 +308,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 632077113a4..fb0711527bf 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -336,9 +336,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data into file as we are restoring dump
+ * of multiple databases which was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -455,7 +460,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1291,7 +1296,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1670,7 +1675,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1690,7 +1696,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..0f6ad8b21d8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -91,11 +91,7 @@ on_exit_nicely(on_exit_nicely_callback function, void *arg)
void
exit_nicely(int code)
{
- int i;
-
- for (i = on_exit_nicely_index - 1; i >= 0; i--)
- on_exit_nicely_list[i].function(code,
- on_exit_nicely_list[i].arg);
+ reset_exit_nicely_list(code);
#ifdef WIN32
if (parallel_init_done && GetCurrentThreadId() != mainThreadId)
@@ -104,3 +100,20 @@ exit_nicely(int code)
exit(code);
}
+
+/*
+ * reset_exit_nicely_list
+ *
+ * cleans index of exit_nicely list.
+ */
+void
+reset_exit_nicely_list(int code)
+{
+ int i;
+
+ for (i = on_exit_nicely_index - 1; i >= 0; i--)
+ on_exit_nicely_list[i].function(code,
+ on_exit_nicely_list[i].arg);
+
+ on_exit_nicely_index = 0;
+}
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index 38551944513..c2249a88185 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -30,6 +30,7 @@ extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
extern void exit_nicely(int code) pg_attribute_noreturn();
+extern void reset_exit_nicely_list(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
#undef pg_fatal
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index afd79287177..ce225c689de 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1185,7 +1185,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index a6dafb92377..ada7d548d86 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -25,14 +26,16 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -65,9 +68,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +80,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -105,7 +111,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
static int statistics_only = 0;
@@ -120,8 +126,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -191,6 +196,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -240,7 +247,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -268,7 +275,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -417,6 +426,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name must be provided
+ * to create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty string");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -471,6 +495,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. For a
+ * non-plain format, create the output directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -510,19 +561,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -622,7 +660,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -635,7 +673,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -646,12 +684,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster based on the specified dump format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1506,10 +1546,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1523,7 +1566,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1531,9 +1574,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory/tar/custom format is specified, create a subdirectory
+ * under the main directory; each database's dump subdirectory will then
+ * be created under it in archive mode, as for a single-db pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open map file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1548,6 +1615,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * In a non-plain format dump, record the database OID and name in the
+ * map.dat file and compute the per-database dump path.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one line with the database OID and name to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1566,9 +1645,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1577,19 +1664,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1599,7 +1697,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1608,17 +1707,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, append the output file path and the
+ * archive format to the pg_dump command so it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s -f %s", pg_dump_bin,
+ pgdumpopts->data, create_opts, dbfile);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1763,3 +1881,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name.  If an empty directory with
+ * that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into directory \"%s\", remove or empty it, "
+ "or run %s with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
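Aside, for reviewers: the map.dat entries written by dumpDatabases above are plain "oid dbname" lines. Here is a small standalone sketch of the parsing the restore side has to do on each line (parse_map_line is an illustrative name, not part of the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse one "oid dbname\n" line in the format dumpDatabases writes to
 * map.dat.  Returns 1 on success, 0 on a malformed line.
 */
int
parse_map_line(const char *line, unsigned *oid, char *dbname, size_t dblen)
{
	char		oid_str[32];
	const char *p;
	size_t		n;

	/* The OID is the first whitespace-delimited token. */
	if (sscanf(line, "%31s", oid_str) != 1 ||
		sscanf(oid_str, "%u", oid) != 1)
		return 0;

	/* Everything after the separating space is the database name. */
	p = line + strlen(oid_str);
	if (*p != ' ')
		return 0;
	snprintf(dbname, dblen, "%s", p + 1);

	/* Strip the trailing newline, if present. */
	n = strlen(dbname);
	if (n > 0 && dbname[n - 1] == '\n')
		dbname[n - 1] = '\0';

	return *oid != 0 && dbname[0] != '\0';
}
```

This mirrors the sscanf/strcpy logic in get_dbname_oid_list_from_mfile below: only the first token is taken as the OID, so a database name containing spaces survives intact.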
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 13e4dc507e0..5856bbe9da4 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,30 +41,76 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "compress_io.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -86,6 +132,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -136,6 +183,7 @@ main(int argc, char **argv)
{"no-statistics", no_argument, &no_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -164,7 +212,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -191,11 +239,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -310,6 +361,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* Patterns of database names to skip while restoring. */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -337,6 +392,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -417,6 +479,106 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path but global.dat is, this is
+ * a pg_dumpall archive: restore the globals and then each database listed
+ * in map.dat (if it exists), skipping databases that match any
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * The -l/--list option is only supported for single-database archives.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified.  We insist on it even if the dump contains only a single
+ * database, because that database might not exist on the target yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the target database already exists and the dump contains a single database, restore that database's archive directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to the database to execute the global SQL commands from global.dat.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("skipping restore of databases because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* a regular single-database archive */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from its archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -446,25 +608,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump,\n"
+ "or all databases from an archive created by pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -482,6 +641,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -494,6 +654,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -530,8 +691,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -636,3 +797,653 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the named file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read one SQL statement from the given file using fgetc, up to and
+ * including the terminating semicolon (the statement terminator used in
+ * global.dat).
+ *
+ * Returns EOF once end-of-file is reached.
+ */
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from fgetc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* Release the scratch buffer. */
+ pfree(q.data);
+
+ /* No input before EOF means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries matching a pattern in
+ * db_exclude_patterns; the list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection; --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Walk the database list and remove each name that is to be skipped.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ *   SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$'
+ *   COLLATE pg_catalog.default
+ *
+ * where XXX is the literal database name taken from dbname_oid_list,
+ * which was read from the map.dat file in the backup directory; that
+ * is why quote_literal_cstr is needed.
+ *
+ * If there is no connection, PATTERN is compared as a literal NAME.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Drop the database from the list if it matched; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names and their
+ * OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only global.dat, there is no database to restore,
+ * so return right away.
+ */
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping restore of databases: file map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract the OID, both as a number and as a string. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* The rest of the line is the database name. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Strip the trailing newline from dbname. */
+ if (dbname[0] != '\0' && dbname[strlen(dbname) - 1] == '\n')
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in file map.dat on line %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database is to be skipped,
+ * but for now we simply list all databases and filter later.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory, using
+ * the mapping in map.dat.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing
+ * global.dat file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found %d database names in map.dat", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("failed to connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+ }
+ }
+
+ /*
+ * Filter the list with --exclude-database patterns; without a
+ * connection, patterns are matched as literal names.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We now have the list of databases to restore after applying
+ * --exclude-database.  Restore them one by one; each restore may
+ * itself use parallel workers.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * Reset override_dbname (set with -d/--dbname) so that objects are
+ * restored into the database created for this entry rather than the
+ * one originally connected to.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+
+ /*
+ * Reset the on_exit_nicely list after each database so that multiple
+ * databases can be restored from one archive.  See the EXIT_NICELY
+ * macro for details.
+ */
+ if (dboid_cell != NULL)
+ reset_exit_nicely_list(n_errors ? 1 : 0);
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open global.dat and execute the global SQL commands one statement at a
+ * time; a semicolon is treated as the statement terminator.  If outfile is
+ * given, copy all SQL commands to it rather than executing them.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %sCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of errors ignored while processing global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on restore of global.dat: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the contents of global.dat to the output file, or to stdout when
+ * outfile is "-".
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ FILE *OPF;
+ int c;
+
+ /* "-" means stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ OPF = fopen(outfile, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file \"%s\": %m", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Free all cells of a database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(unconstify(char *, cell->db_name));
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Free all cells of a string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete the given cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* keep the tail pointer valid when the last cell is removed */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 98ab45adfa3..189f8414412 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2678,6 +2678,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
hi.
v20-0001
in src/bin/pg_dump/pg_dumpall.c, we have:
static const char *connstr = "";
case 'd':
connstr = pg_strdup(optarg);
break;
I am not sure you can declare connstr as "const", since its value can be
changed.
``#include "pg_backup.h"`` can be removed from src/bin/pg_dump/pg_dumpall.c
Other than that, v20_0001 looks good to me.
v20_0002
const char *formatName = "p";
formatName should not be declared as "const", since its value can be changed.
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
can change to
if (mkdir(db_subdir, pg_dir_create_mode) != 0)
pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
then in src/bin/pg_dump/pg_dumpall.c need add ``#include "common/file_perm.h"``
similarly
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
can change to
``
else if (mkdir(dirname, pg_dir_create_mode) != 0)
pg_fatal("could not create directory \"%s\": %m", dirname);
``
+
+ if (!conn)
+ pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");
"db connection" maybe "database connection" or "connection"
+ /*
+ * We need to reset on_exit_nicely_index with each database so that we can
+ * restore multiple databases by archive. See EXIT_NICELY macro for more
+ * details.
+ */
+ if (dboid_cell != NULL)
+ reset_exit_nicely_list(n_errors ? 1 : 0);
I don't fully understand this part. Anyway, by EXIT_NICELY do you mean
MAX_ON_EXIT_NICELY?
Just found out that parseArchiveFormat in pg_restore looks similar to
parseDumpFormat.
The --list option is not applicable to multiple databases, and therefore
--use-list=list-file is not applicable either; we should mention that in
the docs.
The comments in global.dat should not mention "cluster"; "global objects"
would be more appropriate.
The global.dat output should also not contain the
"--\n-- Database \"%s\" dump\n--\n\n" comment.
The attached minor patch fixes this issue.
Attachments:
v20_refactor_restore_comments.nocfbotapplication/octet-stream; name=v20_refactor_restore_comments.nocfbotDownload
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 0752c44896f..210aac1e040 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -595,7 +595,11 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+ else
+ fprintf(OPF, "--\n-- PostgreSQL global objects dump\n--\n\n");
+
if (verbose)
dumpTimestamp("Started on");
@@ -666,7 +670,11 @@ main(int argc, char *argv[])
if (verbose)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+ else
+ fprintf(OPF, "--\n-- PostgreSQL global objects dump complete\n--\n\n");
if (filename)
{
@@ -1619,7 +1627,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Databases\n--\n\n");
/*
@@ -1677,7 +1685,8 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
hi.
some documentation issue:
doc/src/sgml/ref/pg_dumpall.sgml
<variablelist>
<varlistentry>
<term><literal>d</literal></term>
<term><literal>directory</literal></term>
<listitem>
<para>
Output a directory-format archive suitable for input into
pg_restore. Under dboid
subdirectory, this will create a directory with one file
for each table and large
object being dumped, plus a so-called Table of Contents
file describing the dumped
objects in a machine-readable format that pg_restore can
read. A directory format
archive can be manipulated with standard Unix tools; for
example, files in an
uncompressed archive can be compressed with the gzip, lz4,
or zstd tools. This
format is compressed by default using gzip and also
supports parallel dumps.
</para>
</listitem>
</varlistentry>
With the v20 implementation, is this part wrong?
"""
For example, files in an uncompressed archive can be compressed with the
gzip, lz4, or zstd tools. This format is compressed by default using gzip
and also supports parallel dumps.
"""
I think that currently, by default, the pg_dumpall directory format uses
gzip with compression level -1 to do the compression, and pg_dumpall with
format=directory does not support parallel dumps.
-------------------
by default, this is plain format. If non-plain mode is passed, then global.dat
(global sql commands) and map.dat(dboid and dbnames list of all the databases)
files will be created. Apart from these files, one subdirectory with databases
name will be created. Under this databases subdirectory, there will be files
with dboid name for each database and if --format is directory, then toc.dat and
other dump files will be under dboid subdirectory.
-------------------
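For readers skimming the thread, the layout described above can be sketched as follows (file and directory names as described in the patch; the OIDs here are made-up examples):

```
dumpDirName/
    global.dat          -- SQL commands for global objects (roles, ...)
    map.dat             -- one "<dboid> <dbname>" line per database
    databases/
        5/              -- per-database dump, named by database OID
            toc.dat     -- present when --format=directory
            ...
        16384/
            toc.dat
            ...
```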
I think the above text could be changed to the below; the message is clearer, IMHO.
By default, this uses plain format. If a non-plain mode is specified, two files
will be created: **global.dat** (containing SQL commands for global objects) and
**map.dat** (listing database OIDs and names for all databases). Additionally, a
subdirectory named after each database OID will be created.
If the --format option is set to **directory**, then **toc.dat** and other
dump files will be stored within the corresponding database OID subdirectory.
---------------------
doc/src/sgml/ref/pg_restore.sgml
<term><option>--exclude-database=<replaceable
class="parameter">pattern</replaceable></option></term>
we can add:
When emitting a script, this option does not support wildcard matching;
the excluded database name must exactly match the literal
<replaceable class="parameter">pattern</replaceable> string.
A database name containing a newline breaks things for this patch:
CREATE DATABASE "foo
bar";
$ pg_dumpall -Fc --file test
shell command argument contains a newline or carriage return: " dbname='foo
bar'"
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
A database name containing a newline breaks things for this patch:
CREATE DATABASE "foo
bar";

$ pg_dumpall -Fc --file test
shell command argument contains a newline or carriage return: " dbname='foo
bar'"
--
Álvaro Herrera 48°01'N 7°57'E —
Hi Alvaro,
I also reported this issue on 29-01-2025. This breaks even without this
patch.
error with pg_dumpall when db name have new line in double quote
</messages/by-id/CAFC+b6qwc+wpt7_b2R6YhpDkrXeFvFd5NoLbTMMoxX9tfOHjpg@mail.gmail.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Disclaimer: I didn't review these patches fully.
On 2025-Mar-05, Mahendra Singh Thalor wrote:
On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
A database name containing a newline breaks things for this patch:
CREATE DATABASE "foo
bar";
I also reported this issue on 29-01-2025. This breaks even without this
patch also.
Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail. I think it would be good to avoid digging ourselves even deeper into
that hole. More generally, the pg_upgrade tests contain some code to
try database names with almost all possible ascii characters (see
generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
ensure that this new functionality also works correctly for that --
perhaps add an equivalent test to the pg_dumpall test suite.
Looking at 0001:
I'm not sure that the whole common_dumpall_restore.c thing is properly
structured. First, the file name shouldn't presume which programs
exactly are going to use the functionality there. Second, it looks like
there's another PQconnectdbParams() in pg_backup_db.c and I don't
understand what the reason is for that one to be separate. In my mind,
there should be a file maybe called connection.c or connectdb.c or
whatever that's in charge of establishing connection for all the
src/bin/pg_dump programs, for cleanliness sake. (This is probably also
the place where to put an on_exit callback that cleans up any leftover
connections.)
Looking at 0002 I see it mentions looking at the EXIT_NICELY macro for
documentation. No such macro exists. But also I think the addition
(and use) of reset_exit_nicely_list() is not a good idea. It seems to
assume that the only entries in that list are ones that can be cleared
and reinstated whenever. This makes too much of an assumption about how
the program works. It may work today, but it'll get in the way of any
other patch that wants to set up some different on-exit clean up. In
other words, we shouldn't reset the on_exit list at all. Also, this is
just a weird addition:
#define exit_nicely(code) exit(code)
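To make the concern concrete, here is a minimal standalone sketch of the callback list (structure borrowed from pg_backup_utils.c, heavily simplified): reset_exit_nicely_list() runs the callbacks newest-first and then forgets them all, including any registered earlier by unrelated code.

```c
#include <assert.h>
#include <string.h>

#define MAX_ON_EXIT_NICELY 20

typedef void (*on_exit_nicely_callback) (int code, void *arg);

static struct
{
	on_exit_nicely_callback function;
	void	   *arg;
}			on_exit_nicely_list[MAX_ON_EXIT_NICELY];

static int	on_exit_nicely_index;

/* record of the order callbacks ran in, for demonstration */
static char call_order[MAX_ON_EXIT_NICELY + 1];

static void
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
	assert(on_exit_nicely_index < MAX_ON_EXIT_NICELY);
	on_exit_nicely_list[on_exit_nicely_index].function = function;
	on_exit_nicely_list[on_exit_nicely_index].arg = arg;
	on_exit_nicely_index++;
}

/* Run every registered callback in LIFO order, then empty the list. */
static void
reset_exit_nicely_list(int code)
{
	int			i;

	for (i = on_exit_nicely_index - 1; i >= 0; i--)
		on_exit_nicely_list[i].function(code, on_exit_nicely_list[i].arg);
	on_exit_nicely_index = 0;
}

/* demo callback: append the one-character tag passed as arg */
static void
note_call(int code, void *arg)
{
	strncat(call_order, (const char *) arg, 1);
}
```

After the reset, a callback registered first (say, a connection cleanup installed by other code) will never run on a later exit, which is the hazard being pointed out.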
You added "A" as an option to the getopt_long() call in pg_restore, but
no handling for it is added.
I think the --globals-only option to pg_restore should be a separate
commit.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
Thanks Alvaro for feedback and review.
On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Disclaimer: I didn't review these patches fully.
On 2025-Mar-05, Mahendra Singh Thalor wrote:
On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
A database name containing a newline breaks things for this patch:
CREATE DATABASE "foo
bar";

I also reported this issue on 29-01-2025. This breaks even without this
patch.

Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail. I think it would be good to avoid digging us up even deeper in
that hole. More generally, the pg_upgrade tests contain some code to
try database names with almost all possible ascii characters (see
generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
ensure that this new functionality also works correctly for that --
perhaps add an equivalent test to the pg_dumpall test suite.
In the attached patch, I tried to solve the problem of the map.dat
file. I will do more analysis based on dbnames in 002_pg_upgrade.pl
file.
Looking at 0001:
I'm not sure that the whole common_dumpall_restore.c thing is properly
structured. First, the file name shouldn't presume which programs
exactly are going to use the functionality there. Second, it looks like
there's another PQconnectdbParams() in pg_backup_db.c and I don't
understand what the reason is for that one to be separate. In my mind,
there should be a file maybe called connection.c or connectdb.c or
whatever that's in charge of establishing connection for all the
src/bin/pg_dump programs, for cleanliness sake. (This is probably also
the place where to put an on_exit callback that cleans up any leftover
connections.)
Okay. I will do these changes.
Looking at 0002 I see it mentions looking at the EXIT_NICELY macro for
documentation. No such macro exists. But also I think the addition
(and use) of reset_exit_nicely_list() is not a good idea. It seems to
assume that the only entries in that list are ones that can be cleared
and reinstated whenever. This makes too much of an assumption about how
the program works. It may work today, but it'll get in the way of any
other patch that wants to set up some different on-exit clean up. In
other words, we shouldn't reset the on_exit list at all. Also, this is
just a weird addition:
I will do more study for this case and will update here.
#define exit_nicely(code) exit(code)
Okay. I will fix this.
You added "A" as an option to the getopt_long() call in pg_restore, but
no handling for it is added.
Fixed.
I think the --globals-only option to pg_restore should be a separate
commit.
Okay.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
Here, I am attaching updated patches for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v21_0002_pg_dumpall-with-non-text_format-5th_mar.patchapplication/octet-stream; name=v21_0002_pg_dumpall-with-non-text_format-5th_mar.patchDownload
From 950daf288c13b889c67118770373f15fced603c7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 5 Mar 2025 22:10:22 +0530
Subject: [PATCH 2/2] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global sql commands in simple plain format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of postgres db
-- to get dboid, refer dbname in map.file
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, we only restore globals; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory,
check for global.dat to restore all databases. If a global.dat file exists in
the directory, first restore all globals from global.dat and then restore all
databases one by one from the map.dat list (if it exists).
for --exclude-database=PATTERN for pg_restore.
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if there is no db connection, then only literal PATTERN=NAME matching is done.
for each database, we clean the on_exit_nicely_index list.
at the end of the restore, we give a warning with the total number of errors
(including errors from global.dat and each database), and for each database we
print a warning with the dbname and its total errors.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/Makefile | 4 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 23 +-
src/bin/pg_dump/pg_backup_utils.h | 1 +
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 290 +++++++--
src/bin/pg_dump/pg_restore.c | 850 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1249 insertions(+), 78 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index c2fa5be9519..c36802e06fd 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster based on the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. If we want to dump all the databases,
+ then pass a non-plain format so that the dumps of all databases can be
+ taken in separate subdirectories in archive format.
+ by default, this is plain format.
+
+ If non-plain mode is passed, then global.dat (global sql commands) and
+ map.dat(dboid and dbnames list of all the databases) files will be created.
+ Apart from these files, one subdirectory with databases name will be created.
+ Under this databases subdirectory, there will be files with dboid name for each
+ database and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 199ea3345f3..46bdbc092c3 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from dump of <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 86006d111c3..a4e557d62c7 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -47,8 +47,8 @@ all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dump.o common.o pg_dump_sort.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_restore: pg_restore.o common_dumpall_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_restore.o common_dumpall_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 97dbfaeb67f..ddecac5cf09 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -69,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index e783cc68d89..c7ced7add1b 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -309,7 +309,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 632077113a4..fb0711527bf 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -336,9 +336,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data to the output file, as we are
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -455,7 +460,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1291,7 +1296,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1670,7 +1675,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1690,7 +1696,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..0f6ad8b21d8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -91,11 +91,7 @@ on_exit_nicely(on_exit_nicely_callback function, void *arg)
void
exit_nicely(int code)
{
- int i;
-
- for (i = on_exit_nicely_index - 1; i >= 0; i--)
- on_exit_nicely_list[i].function(code,
- on_exit_nicely_list[i].arg);
+ reset_exit_nicely_list(code);
#ifdef WIN32
if (parallel_init_done && GetCurrentThreadId() != mainThreadId)
@@ -104,3 +100,20 @@ exit_nicely(int code)
exit(code);
}
+
+/*
+ * reset_exit_nicely_list
+ *
+ * Runs all registered callbacks in LIFO order and resets the list index.
+ */
+void
+reset_exit_nicely_list(int code)
+{
+ int i;
+
+ for (i = on_exit_nicely_index - 1; i >= 0; i--)
+ on_exit_nicely_list[i].function(code,
+ on_exit_nicely_list[i].arg);
+
+ on_exit_nicely_index = 0;
+}
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index 38551944513..c2249a88185 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -30,6 +30,7 @@ extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
extern void exit_nicely(int code) pg_attribute_noreturn();
+extern void reset_exit_nicely_list(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
#undef pg_fatal
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4f4ad2ee150..b317d6d7122 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1189,7 +1189,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 1a0c1bbeae3..cd6234209d6 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -25,14 +26,16 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
+#define exit_nicely(code) exit(code)
typedef struct
{
@@ -65,9 +68,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +80,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
@@ -105,7 +111,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
static int statistics_only = 0;
@@ -120,8 +126,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -191,6 +196,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -240,7 +247,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -268,7 +275,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -417,6 +426,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name must be provided to
+ * create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -471,6 +495,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open global.dat file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -510,19 +561,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -622,7 +660,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -635,7 +673,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -646,12 +684,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1554,10 +1594,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1571,7 +1614,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1579,9 +1622,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain (directory/tar/custom) format is specified, create a
+ * "databases" subdirectory under the main directory; each database's
+ * dump is then created there, just as for a single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open map file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1596,6 +1663,22 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, append the database OID and name to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /*
+ * Write a two-line entry for each database: the first line holds the
+ * OID and the length of the dbname, and the second line holds the
+ * dbname itself.
+ */
+ fprintf(map_file, "%s %zu\n%s\n", oid, strlen(dbname), dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1614,9 +1697,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases, so add the --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1625,19 +1716,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1647,7 +1749,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1656,17 +1759,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the dump
+ * format to the pg_dump command so it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1811,3 +1933,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * directory of that name if it is empty.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
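For reference, the map.dat format written by dumpDatabases() above is one two-line entry per database: the OID and the dbname length on the first line, then the name itself on its own line (the explicit length keeps names containing spaces unambiguous). A hedged sketch of a reader for that layout — `parse_map_entry()` is a hypothetical helper for illustration, not code from this patch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch of the two-line map.dat entry format: the first
 * line holds the database OID and the dbname length, the second line
 * holds the name itself.  Returns 1 on success, 0 at EOF, -1 on a
 * malformed entry.
 */
static int
parse_map_entry(FILE *fp, unsigned int *db_oid, char *dbname, size_t dbname_sz)
{
	char		line[1024];
	size_t		name_len = 0;
	size_t		i;
	int			c;

	if (fgets(line, sizeof(line), fp) == NULL)
		return 0;				/* EOF: no more entries */

	if (sscanf(line, "%u %zu", db_oid, &name_len) != 2 ||
		name_len + 1 > dbname_sz)
		return -1;				/* malformed entry */

	/* Copy exactly name_len bytes of the database name. */
	for (i = 0; i < name_len; i++)
	{
		if ((c = fgetc(fp)) == EOF)
			return -1;
		dbname[i] = (char) c;
	}
	dbname[i] = '\0';

	/* The entry must end at a line boundary. */
	if (fgetc(fp) != '\n')
		return -1;

	return 1;
}
```

This mirrors the checks the patch performs in get_dbname_oid_list_from_mfile(): length-prefixed read, then a newline sanity check before accepting the entry.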
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 13e4dc507e0..de0f31cf693 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,30 +41,76 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -86,6 +132,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -136,6 +183,7 @@ main(int argc, char **argv)
{"no-statistics", no_argument, &no_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -164,7 +212,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -191,11 +239,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -310,6 +361,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to exclude while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -337,6 +392,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -417,6 +479,106 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, then check for
+ * global.dat. If global.dat is present, restore all the databases
+ * listed in map.dat (if it exists), skipping any that match
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * The -l/--list option is supported only for a single-database dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified. Report an error even if the dump contains only a single
+ * database, since the target database may not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the target database already exists, restore from that database's individual dump file instead.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to the database to execute the global SQL commands from global.dat.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = connectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist; restore a single database */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat file.
+ *
+ * Returns the number of errors ignored during the restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -446,25 +608,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, multiple databases are restored.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -482,6 +641,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -494,6 +654,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -530,8 +691,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -636,3 +797,662 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer with fgetc until the next semicolon
+ * (the SQL statement terminator used in global.dat).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' so the result can be used as a C string. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries that match a pattern in the
+ * db_exclude_patterns list; the list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("treating --exclude-database PATTERN as a literal name because there is no database connection");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX represents the database name as a string literal, taken from
+ * dbname_oid_list (originally read from the map.dat file in the
+ * backup directory); that is why quote_literal_cstr is needed.
+ *
+ * Without a database connection, treat PATTERN as a literal name.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the entry if it is excluded; otherwise count it for restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file and read it line by line to build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return early: there is
+ * no database to restore.
+ */
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restore is skipped because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ size_t db_name_size = 0;
+ char dbname[MAXPGPATH + 1] = {'\0'};
+ int i = 0;
+ char ch;
+
+ /* Extract the database OID and the length of the dbname. */
+ sscanf(line, "%u %zu", &db_oid, &db_name_size);
+
+ /* Now copy dbname. */
+ while (i < db_name_size)
+ dbname[i++] = fgetc(pfile);
+
+ ch = fgetc(pfile);
+
+ if (ch != '\n')
+ pg_fatal("invalid entry in map.dat file at line %d", 2 * count + 2);
+
+ /* Add \0 in the end. */
+ dbname[i] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", 2 * count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be skipped,
+ * but for now we list all databases and filter afterwards.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory, using
+ * the mapping in the map.dat file.
+ *
+ * Databases specified with the --exclude-database option are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entries, just process the global.dat file and
+ * return.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = connectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+
+ /* If that fails, fall back to template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\"; trying database \"template1\" instead");
+
+ conn = connectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN/NAME patterns against
+ * the list of databases found in map.dat.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We now have the list of databases to restore, after filtering out
+ * the names matched by --exclude-database. Launch parallel workers to
+ * restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that
+ * objects are restored into the already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+
+ /*
+ * Reset on_exit_nicely_index for each database so that we can restore
+ * multiple archives in a single run.
+ */
+ if (dboid_cell != NULL)
+ reset_exit_nicely_list(n_errors ? 1 : 0);
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time. A semicolon is treated as the statement
+ * terminator. If outfile is given, copy all SQL commands into it rather
+ * than executing them.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors while processing global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the global.dat file into the output file. If "-" is given as
+ * outfile, print the commands to stdout instead.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ FILE *OPF;
+ int c;
+
+ /* "-" means stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ OPF = fopen(outfile, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell->db_name);
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete one cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when deleting the last cell. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell->db_name);
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ *
+ * Helper for quote_literal_cstr; writes the quoted form of src into dst
+ * and returns the number of bytes written.
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9840060997f..b43a3e48e3b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2695,6 +2695,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
Attachment: v21_0001_move-common-code-of-pg_dumpall-and-pg_restore-to-new_file.patch (application/octet-stream)
From 77dad68de0e59ca7489da6e695f5436759ba17ef Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 20 Feb 2025 16:27:17 +0530
Subject: [PATCH 1/2] move common code of pg_dumpall and pg_restore to new file
connectDatabase is used by both pg_dumpall and pg_restore, so move the
common code to a new file.
---
src/bin/pg_dump/Makefile | 4 +-
src/bin/pg_dump/common_dumpall_restore.c | 289 +++++++++++++++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 24 ++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_dumpall.c | 268 +--------------------
5 files changed, 324 insertions(+), 262 deletions(-)
create mode 100644 src/bin/pg_dump/common_dumpall_restore.c
create mode 100644 src/bin/pg_dump/common_dumpall_restore.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..86006d111c3 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -50,8 +50,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o common_dumpall_restore.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
new file mode 100644
index 00000000000..92f52b7239a
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -0,0 +1,289 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.c
+ * This is a common file for pg_dumpall and pg_restore.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "common_dumpall_restore.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+#define exit_nicely(code) exit(code)
+
+/*
+ * connectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, 'connstr' (if given) is set to a connection string
+ * containing the options used, and 'server_version' (if given) is set
+ * to the server version, so that the caller can use them.
+ */
+PGconn *
+connectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ static char *password = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 6;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in the
+ * form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
new file mode 100644
index 00000000000..7fe1c00ab71
--- /dev/null
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * common_dumpall_restore.h
+ * Common header file for pg_dumpall and pg_restore
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/common_dumpall_restore.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef COMMON_DUMPALL_RESTORE_H
+#define COMMON_DUMPALL_RESTORE_H
+
+#include "pg_backup.h"
+
+extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..97dbfaeb67f 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -49,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'common_dumpall_restore.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e0867242526..1a0c1bbeae3 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -20,6 +20,7 @@
#include "catalog/pg_authid_d.h"
#include "common/connect.h"
+#include "common_dumpall_restore.h"
#include "common/file_utils.h"
#include "common/hashfn_unstable.h"
#include "common/logging.h"
@@ -71,12 +72,6 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
@@ -85,7 +80,7 @@ static void read_dumpall_filters(const char *filename, SimpleStringList *pattern
static char pg_dump_bin[MAXPGPATH];
static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -484,7 +479,8 @@ main(int argc, char *argv[])
if (pgdb)
{
conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
@@ -492,10 +488,12 @@ main(int argc, char *argv[])
else
{
conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ prompt_password, false,
+ progname, &connstr, &server_version);
if (!conn)
conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ prompt_password, true,
+ progname, &connstr, &server_version);
if (!conn)
{
@@ -1718,256 +1716,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
On Thu, Mar 6, 2025 at 12:49 AM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
Thanks Alvaro for feedback and review.
On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Disclaimer: I didn't review these patches fully.
On 2025-Mar-05, Mahendra Singh Thalor wrote:
On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
A database name containing a newline breaks things for this patch:
CREATE DATABASE "foo
bar";
I also reported this issue on 29-01-2025. This breaks even without this
patch also.
Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail. I think it would be good to avoid digging us up even deeper in
that hole. More generally, the pg_upgrade tests contain some code to
try database names with almost all possible ascii characters (see
generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
ensure that this new functionality also works correctly for that --
perhaps add an equivalent test to the pg_dumpall test suite.
In the attached patch, I tried to solve the problem of the map.dat
file. I will do more analysis based on dbnames in 002_pg_upgrade.pl
file.
hi.
/*
* Append the given string to the shell command being built in the buffer,
* with shell-style quoting as needed to create exactly one argument.
*
* Forbid LF or CR characters, which have scant practical use beyond designing
* security breaches. The Windows command shell is unusable as a conduit for
* arguments containing LF or CR characters. A future major release should
* reject those characters in CREATE ROLE and CREATE DATABASE, because use
* there eventually leads to errors here.
*
* appendShellString() simply prints an error and dies if LF or CR appears.
* appendShellStringNoError() omits those characters from the result, and
* returns false if there were any.
*/
void
appendShellString(PQExpBuffer buf, const char *str)
Per the above comments, we need to disallow LF/CR in database names and
role names when issuing shell commands.
The role name LF/CR issue is already handled in the getopt_long() loop in
src/bin/pg_dump/pg_dumpall.c:
case 3:
use_role = pg_strdup(optarg);
appendPQExpBufferStr(pgdumpopts, " --role ");
appendShellString(pgdumpopts, use_role);
We can also fail earlier for database names in dumpDatabases(), right
after executeQuery().
Please check attached, which is based on *v20*.
In v21, the line
+#include "common_dumpall_restore.h"
in src/bin/pg_dump/pg_dumpall.c was added by both v21-0001 and v21-0002,
so it is included twice.
Attachments:
v20-0001-pg_dumpall-deal-witth-newline-or-carriage-ret.no-cfbot (application/octet-stream)
From bf0db3afb9cb7d34033fe500315be8033efff49e Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Mon, 10 Mar 2025 15:27:32 +0800
Subject: [PATCH v20 1/1] pg_dumpall deal witth newline or carriage return
pg_dumpall: fail earlier if any database name contains a newline.
We may also need to deal with role names that contain a newline or
carriage return; see also the comments in appendShellString.
---
src/bin/pg_dump/common_dumpall_restore.c | 18 ++++++++++++++++++
src/bin/pg_dump/common_dumpall_restore.h | 2 ++
src/bin/pg_dump/pg_dumpall.c | 13 +++++++++++++
3 files changed, 33 insertions(+)
diff --git a/src/bin/pg_dump/common_dumpall_restore.c b/src/bin/pg_dump/common_dumpall_restore.c
index 92f52b7239a..4e81142373f 100644
--- a/src/bin/pg_dump/common_dumpall_restore.c
+++ b/src/bin/pg_dump/common_dumpall_restore.c
@@ -287,3 +287,21 @@ executeQuery(PGconn *conn, const char *query)
return res;
}
+
+/*
+ * Append str to buf; exit if the string contains a newline or carriage return.
+*/
+void
+string_contain_lfcr(PQExpBuffer buf, const char *str, const char *kind)
+{
+ Assert(kind != NULL);
+ if (!appendShellStringNoError(buf, str))
+ {
+
+ pg_log_error("%s contains a newline or carriage return: \"%s\"", kind, str);
+ pg_log_error_hint("If you want to dump data on \"%s\", "
+ "you may need to rename it so that it does not contain a newline or carriage return",
+ str);
+ exit_nicely(1);
+ }
+}
diff --git a/src/bin/pg_dump/common_dumpall_restore.h b/src/bin/pg_dump/common_dumpall_restore.h
index 7fe1c00ab71..c458abd0eef 100644
--- a/src/bin/pg_dump/common_dumpall_restore.h
+++ b/src/bin/pg_dump/common_dumpall_restore.h
@@ -15,10 +15,12 @@
#define COMMON_DUMPALL_RESTORE_H
#include "pg_backup.h"
+#include "pqexpbuffer.h"
extern PGconn *connectDatabase(const char *dbname, const char *connection_string, const char *pghost,
const char *pgport, const char *pguser,
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version);
extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern void string_contain_lfcr(PQExpBuffer buf, const char *str, const char *kind);
#endif /* COMMON_DUMPALL_RESTORE_H */
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 0752c44896f..f6614ce3bc2 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1601,6 +1601,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
char db_subdir[MAXPGPATH];
char dbfilepath[MAXPGPATH];
FILE *map_file = NULL;
+ PQExpBufferData test_dbname;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1622,6 +1623,18 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * exit earlier if database name contain newline or carriage return.
+ * also see appendShellString comments.
+ */
+ initPQExpBuffer(&test_dbname);
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ char *dbname = PQgetvalue(res, i, 0);
+ string_contain_lfcr(&test_dbname, dbname, "database name");
+ }
+ termPQExpBuffer(&test_dbname);
+
/*
* If directory/tar/custom format is specified then create a subdirectory
* under the main directory and each database dump file subdirectory will
--
2.34.1
Thanks Alvaro and Jian for the review and feedback.
On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Disclaimer: I didn't review these patches fully.
On 2025-Mar-05, Mahendra Singh Thalor wrote:
On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
A database name containing a newline breaks things for this patch:
CREATE DATABASE "foo
bar";
I also reported this issue on 29-01-2025. This breaks even without this
patch.
Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail. I think it would be good to avoid digging ourselves even deeper
into that hole. More generally, the pg_upgrade tests contain some code to
try database names with almost all possible ascii characters (see
generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
ensure that this new functionality also works correctly for that --
perhaps add an equivalent test to the pg_dumpall test suite.
As Jian also pointed out, we should not allow \n\r in dbnames. I am
keeping dbnames as single-line names only.
I am testing with the pg_upgrade/t/002_pg_upgrade.pl file to
check various dbnames.
Looking at 0001:
I'm not sure that the whole common_dumpall_restore.c thing is properly
structured. First, the file name shouldn't presume which programs
exactly are going to use the functionality there. Second, it looks like
there's another PQconnectdbParams() in pg_backup_db.c and I don't
understand what the reason is for that one to be separate. In my mind,
there should be a file maybe called connection.c or connectdb.c or
whatever that's in charge of establishing connection for all the
src/bin/pg_dump programs, for cleanliness' sake. (This is probably also
the place where to put an on_exit callback that cleans up any leftover
connections.)
I did some more refactoring and made a connectdb.c file.
Looking at 0002 I see it mentions looking at the EXIT_NICELY macro for
documentation. No such macro exists. But also I think the addition
(and use) of reset_exit_nicely_list() is not a good idea. It seems to
assume that the only entries in that list are ones that can be cleared
and reinstated whenever. This makes too much of an assumption about how
the program works. It may work today, but it'll get in the way of any
other patch that wants to set up some different on-exit clean up. In
other words, we shouldn't reset the on_exit list at all. Also, this is
just a weird addition:
Based on some discussions, I added handling for cleanup. For the first
database, I save the index into the array, and then I reuse the same
index for the rest of the databases: since we close the archive file in
CloseArchive, the same index can be used for the next database.
#define exit_nicely(code) exit(code)
Fixed.
You added "A" as an option to the getopt_long() call in pg_restore, but
no handling for it is added.
Fixed.
I think the --globals-only option to pg_restore should be a separate
commit.
I will make this in the next version.
Here, I am attaching updated patches for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v22_0001_move-common-code-of-pg_dumpall-and-pg_restore-to-new_file.patch (application/octet-stream)
From 63efb00ca40d87e853da6266d536563b0caef7f6 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 11 Mar 2025 18:40:00 +0530
Subject: [PATCH 1/2] move common code related to connections to a new file
ConnectDatabase is used by pg_dumpall, pg_restore,
and pg_dump, so move the common code to a new file.
New file name: connectdb.c
---
src/bin/pg_dump/Makefile | 9 +-
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 75 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 279 ++-------------------------
7 files changed, 32 insertions(+), 343 deletions(-)
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..c488ab4aecf 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -40,7 +40,8 @@ OBJS = \
pg_backup_directory.o \
pg_backup_null.o \
pg_backup_tar.o \
- pg_backup_utils.o
+ pg_backup_utils.o \
+ connectdb.o
all: pg_dump pg_restore pg_dumpall
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o connectdb.o pg_backup_utils.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o connectdb.o pg_backup_utils.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
@@ -71,5 +72,5 @@ uninstall:
rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
clean distclean:
- rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o
+ rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o connectdb.o
rm -rf tmp_check
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..d5f805fb511 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -30,6 +30,7 @@ pg_dump_sources = files(
'common.c',
'pg_dump.c',
'pg_dump_sort.c',
+ 'connectdb.c',
)
if host_system == 'windows'
@@ -49,6 +50,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'connectdb.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index e783cc68d89..731cb2d19fb 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -291,7 +291,7 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
+extern void ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 7480e122b61..12f3f39e39b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -413,7 +413,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4430,7 +4430,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5047,7 +5047,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..227dd963984 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
+ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c371570501a..6bb54c1a2b4 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -934,7 +934,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e0867242526..e7e492afa28 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,15 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
+const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -125,8 +119,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -483,19 +475,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1718,256 +1713,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
v22_0002_pg_dumpall-with-non-text_format-11th_march.patch (application/octet-stream)
From 7ad1ff448ebc60bcc2035bc81a217e2eb01ce20b Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 11 Mar 2025 19:29:15 +0530
Subject: [PATCH 2/2] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are as:
global.dat ::: global SQL commands in plain text format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
if needed, restore single db by particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid, refer to the dbname's entry in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat to restore all databases. If a global.dat file exists in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists).
For --exclude-database=PATTERN in pg_restore:
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default;
if there is no db connection, only exact PATTERN=NAME matching is done.
For each database, we clean the on_exit_nicely_index list.
At the end of the restore, we emit a warning with the total number of errors (including
global.dat errors and each database's errors), and for each database we print a warning
with the dbname and its total errors.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/meson.build | 3 +-
src/bin/pg_dump/parallel.c | 11 +-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 3 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 22 +-
src/bin/pg_dump/pg_backup_utils.h | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 280 +++++++--
src/bin/pg_dump/pg_restore.c | 847 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1248 insertions(+), 79 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index c2fa5be9519..c36802e06fd 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster in the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump all databases into
+ separate subdirectories in archive format,
+ pass a non-plain format.
+ By default, the format is plain.
+
+ If a non-plain format is passed, then global.dat (global SQL commands) and
+ map.dat (the dboid and dbname list of all the databases) files will be created.
+ Apart from these files, a subdirectory named databases will be created.
+ Under this databases subdirectory, there will be a file or subdirectory named for each
+ database's dboid, and if <option>--format</option> is directory, then toc.dat and other
+ dump files will be under the dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 199ea3345f3..46bdbc092c3 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a dump created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
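As the documentation above notes, pg_restore can now restore archives created by pg_dumpall as well as pg_dump. One way it can tell the two apart is by which TOC file is present in the directory: a pg_dumpall archive carries a global.dat file, while a single-database directory archive carries toc.dat. A standalone sketch of such a file-presence check (the helper name and exact signature here are assumptions, loosely mirroring the patch's `IsFileExistsInDirectory()`):

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Report whether a regular file named fname exists directly under dir.
 * A check like this lets pg_restore distinguish a pg_dumpall archive
 * (global.dat present) from a pg_dump directory archive (toc.dat). */
int file_exists_in_directory(const char *dir, const char *fname)
{
    char        path[4096];
    struct stat st;

    /* Reject paths that would not fit in the buffer. */
    if (snprintf(path, sizeof(path), "%s/%s", dir, fname) >= (int) sizeof(path))
        return 0;

    return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}
```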
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index d5f805fb511..dc1ed410838 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -16,6 +16,7 @@ pg_dump_common_sources = files(
'pg_backup_null.c',
'pg_backup_tar.c',
'pg_backup_utils.c',
+ 'connectdb.c',
)
pg_dump_common = static_library('libpgdump_common',
@@ -30,7 +31,6 @@ pg_dump_sources = files(
'common.c',
'pg_dump.c',
'pg_dump_sort.c',
- 'connectdb.c',
)
if host_system == 'windows'
@@ -70,6 +70,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'connectdb.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..a36d2a5bf84 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -326,11 +326,18 @@ getThreadLocalPQExpBuffer(void)
* pg_dump and pg_restore call this to register the cleanup handler
* as soon as they've created the ArchiveHandle.
*/
-void
+int
on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
- on_exit_nicely(archive_close_connection, &shutdown_info);
+ return on_exit_nicely(archive_close_connection, &shutdown_info);
+}
+
+void
+replace_on_exit_close_archive(Archive *AHX, int idx)
+{
+ shutdown_info.AHX = AHX;
+ set_on_exit_nicely_entry(archive_close_connection, &shutdown_info, idx);
}
/*
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 731cb2d19fb..41ee305850a 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -309,7 +309,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 12f3f39e39b..ad2a62a2c9c 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -336,9 +336,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, open the output file in append mode, since we are
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -455,7 +460,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1291,7 +1296,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1670,7 +1675,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1690,7 +1696,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..ae433132435 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -385,7 +385,8 @@ struct _tocEntry
};
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
-extern void on_exit_close_archive(Archive *AHX);
+extern int on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX, int idx);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..59ece2999a8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -61,14 +61,26 @@ set_dump_section(const char *arg, int *dumpSections)
/* Register a callback to be run when exit_nicely is invoked. */
-void
+int
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
- if (on_exit_nicely_index >= MAX_ON_EXIT_NICELY)
- pg_fatal("out of on_exit_nicely slots");
- on_exit_nicely_list[on_exit_nicely_index].function = function;
- on_exit_nicely_list[on_exit_nicely_index].arg = arg;
+ set_on_exit_nicely_entry(function, arg, on_exit_nicely_index);
on_exit_nicely_index++;
+
+ return (on_exit_nicely_index - 1);
+}
+
+void
+set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int i)
+{
+ if (i >= MAX_ON_EXIT_NICELY)
+ pg_fatal("out of on_exit_nicely slots");
+
+ if (i > on_exit_nicely_index)
+ pg_fatal("no on_exit_nicely entry exists at index %d", i);
+
+ on_exit_nicely_list[i].function = function;
+ on_exit_nicely_list[i].arg = arg;
}
/*
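The pg_backup_utils.c changes above make registration return a slot index so that a later caller can overwrite the same slot instead of consuming a new one for every per-database restore. A minimal standalone model of that pattern (all names here are illustrative stand-ins, not the patch's actual symbols):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SLOTS 4

typedef void (*cleanup_fn)(int code, void *arg);

static struct
{
    cleanup_fn  fn;
    void       *arg;
}           slots[MAX_SLOTS];
static int  n_slots = 0;

/* Register a cleanup callback and return its slot index, as the patched
 * on_exit_nicely() now does. */
int
register_cleanup(cleanup_fn fn, void *arg)
{
    assert(n_slots < MAX_SLOTS);    /* the real code calls pg_fatal() here */
    slots[n_slots].fn = fn;
    slots[n_slots].arg = arg;
    return n_slots++;
}

/* Overwrite an already-registered slot, as replace_on_exit_close_archive()
 * does for each subsequent per-database archive. */
void
replace_cleanup(int idx, cleanup_fn fn, void *arg)
{
    assert(idx >= 0 && idx < n_slots);  /* must refer to a live slot */
    slots[idx].fn = fn;
    slots[idx].arg = arg;
}
```

Reusing one slot keeps the fixed-size exit-handler table from overflowing when many databases are restored in a single run.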
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index 38551944513..57f3197f103 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -28,7 +28,8 @@ typedef void (*on_exit_nicely_callback) (int code, void *arg);
extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
-extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern int on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern void set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int idx);
extern void exit_nicely(int code) pg_attribute_noreturn();
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6bb54c1a2b4..5b6d4364eb3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1186,7 +1186,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e7e492afa28..ee7c62da09d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
const char *progname;
@@ -104,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
static int statistics_only = 0;
@@ -143,6 +147,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +193,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +244,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +272,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +423,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name must be provided;
+ * it is used as the name of the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -468,6 +492,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. For a
+ * non-plain format, create the output directory and the global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create a new directory, or accept an existing empty one. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -507,19 +558,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -619,7 +657,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -632,7 +670,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -643,12 +681,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or an archive.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1551,10 +1591,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1568,7 +1611,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1576,9 +1619,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main output directory; each database is then dumped into its
+ * own file or subdirectory there, just as for a single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open map file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1593,6 +1660,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, append the database OID and name to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1611,9 +1690,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1622,19 +1709,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
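dumpDatabases() above writes map.dat with one `<oid> <dbname>` pair per line. A standalone sketch of parsing one such line (the patch's actual reader is `get_dbname_oid_list_from_mfile()` in pg_restore; this helper and its buffer size are illustrative assumptions):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parse one "<oid> <dbname>" line as written to map.dat.
 * dbname must point to a buffer of at least 64 bytes.
 * Returns 1 on success, 0 if the line does not match. */
int
parse_map_line(const char *line, unsigned int *oid, char *dbname)
{
    /* %63[^\n] reads up to end of line (database names may contain
     * spaces) and guards against buffer overflow. */
    return sscanf(line, "%u %63[^\n]", oid, dbname) == 2;
}
```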
@@ -1644,7 +1742,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1653,17 +1752,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output file name and the
+ * dump format to pg_dump so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1808,3 +1926,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * directory if it is empty.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove the contents of directory \"%s\", "
+ "or run %s with -f/--file pointing to a "
+ "different location.",
+ dirname, progname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the dump format specification.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
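parseDumpFormat() above accepts both the single-letter and spelled-out format names, case-insensitively. A self-contained sketch of the same dispatch (the enum names are placeholders for the real `ArchiveFormat` values, and the error path is simplified to a sentinel instead of `pg_fatal()`):

```c
#include <strings.h>            /* strcasecmp */

/* Illustrative stand-ins for the patch's ArchiveFormat values. */
typedef enum
{
    FMT_PLAIN,
    FMT_CUSTOM,
    FMT_DIRECTORY,
    FMT_TAR,
    FMT_BAD
}           Fmt;

/* Accept both one-letter and spelled-out forms, case-insensitively,
 * mirroring parseDumpFormat() in the patch. */
Fmt
parse_format(const char *s)
{
    if (strcasecmp(s, "c") == 0 || strcasecmp(s, "custom") == 0)
        return FMT_CUSTOM;
    if (strcasecmp(s, "d") == 0 || strcasecmp(s, "directory") == 0)
        return FMT_DIRECTORY;
    if (strcasecmp(s, "p") == 0 || strcasecmp(s, "plain") == 0)
        return FMT_PLAIN;
    if (strcasecmp(s, "t") == 0 || strcasecmp(s, "tar") == 0)
        return FMT_TAR;
    return FMT_BAD;             /* the real code calls pg_fatal() here */
}
```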
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 13e4dc507e0..9b802e7a6bd 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,30 +41,77 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
+static int on_exit_index = 0;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -86,6 +133,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -136,6 +184,7 @@ main(int argc, char **argv)
{"no-statistics", no_argument, &no_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -164,7 +213,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -191,11 +240,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -310,6 +362,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -337,6 +393,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -417,6 +480,108 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If there is no toc.dat file in the given path, check for global.dat.
+ * If global.dat is present, restore all the databases listed in map.dat
+ * (if it exists), skipping any that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * The -l/--list option is supported only for single-database archives.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified. Report the error even if the dump contains only a single
+ * database, because that database might not yet exist on the target.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the target database already exists and the archive contains only one database, restore from that database's dump file instead.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to the database to execute the global SQL commands from global.dat.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* With --globals-only, restore only global.dat and return. */
+ if (globals_only)
+ {
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist, so restore a single database. */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ on_exit_index = 0; /* Reset index. */
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat file.
+ *
+ * Returns the number of errors ignored during the restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -425,8 +590,14 @@ main(int argc, char **argv)
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
* it's still NULL, the cleanup function will just be a no-op.
+ * When restoring multiple databases, save the exit_nicely callback index
+ * so that the same slot can be reused for each database, since the
+ * previous archive has already been closed by CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_index = on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH, on_exit_index);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -446,25 +617,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, multiple databases can be restored as well.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -482,6 +650,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -494,6 +663,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -530,8 +700,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -636,3 +806,648 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc() until a semicolon, the
+ * SQL statement terminator used in global.dat, is seen.
+ *
+ * EOF is returned when end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if(c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries that match a pattern in the
+ * db_exclude_patterns list; the list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available, so each --exclude-database PATTERN will be matched literally as a NAME");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct the pattern-matching query:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * XXX represents the database name as a string literal, taken from
+ * dbname_oid_list, which is initially built from the map.dat file in
+ * the backup directory; that is why quote_literal_cstr is needed.
+ *
+ * If there is no database connection, treat PATTERN as a literal NAME.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Either skip the database or count it as one to be restored. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return here: there is no
+ * database to restore.
+ */
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping restore of databases: map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract the database OID. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* Now copy the database name. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding the dbname to the list we could check whether the
+ * database should be skipped, but for now we simply list all the
+ * databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory, using
+ * the mapping in the map.dat file. Databases specified with the
+ * --exclude-database option are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing
+ * global.dat file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("no database connection; trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\"; trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process --exclude-database=PATTERN; if there is no connection, each
+ * PATTERN is matched literally as a NAME.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * So far we have built the list of databases to restore, after skipping
+ * the names matched by --exclude-database. Now restore these databases
+ * one by one.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while (dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that objects
+ * are restored into the corresponding already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+ count++;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands, one
+ * statement at a time; a semicolon is treated as the statement terminator.
+ * If outfile is given, copy the SQL commands into it rather than
+ * executing them.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of errors ignored while processing global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the contents of global.dat into the output file, or print them to
+ * stdout if "-" is given as the outfile.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ OPF = fopen(outfile, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the database name/OID list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete one cell from the database name/OID list; prev is the cell's
+ * predecessor, or NULL if the cell is the list head.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when the last cell is removed. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9840060997f..b43a3e48e3b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2695,6 +2695,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
On 2025-Mar-11, Mahendra Singh Thalor wrote:
On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail.

As Jian also pointed out, we should not allow \n\r in dbnames. I am
keeping dbnames as single-line names only.

Ehm, did you get consensus on adding such a restriction?
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
On Tue, 11 Mar 2025 at 20:12, Álvaro Herrera <alvherre@alvh.no-ip.org>
wrote:
On 2025-Mar-11, Mahendra Singh Thalor wrote:
On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org>
wrote:
Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail.

As Jian also pointed out, we should not allow \n\r in dbnames. I am
keeping dbnames as single-line names only.

Ehm, did you get consensus on adding such a restriction?
Hi Alvaro,
In the map.dat file, I tried to fix this issue by storing the number of
characters in the dbname, but as the code comments note, we do not
currently support \n\r in dbnames, so I removed that handling.
I will do some more study to fix this issue.
/*
* Append the given string to the shell command being built in the buffer,
* with shell-style quoting as needed to create exactly one argument.
*
* Forbid LF or CR characters, which have scant practical use beyond designing
* security breaches. The Windows command shell is unusable as a conduit for
* arguments containing LF or CR characters. A future major release should
* reject those characters in CREATE ROLE and CREATE DATABASE, because use
* there eventually leads to errors here.
*
* appendShellString() simply prints an error and dies if LF or CR appears.
* appendShellStringNoError() omits those characters from the result, and
* returns false if there were any.
*/
void
appendShellString(PQExpBuffer buf, const char *str)
Sorry, in the v22 patches, I missed running "git add" for the connectdb.c file.
(Thanks Andrew for reporting this offline)
Here, I am attaching updated patches for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v23_0001_move-common-code-of-pg_dumpall-and-pg_restore-to-new_file.patch (application/octet-stream)
From f61af4fcf612e0b811824c72d35e1bcdb6eb8ae6 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 11 Mar 2025 20:54:33 +0530
Subject: [PATCH 1/2] move common code related to connection to a new file
ConnectDatabase is used by pg_dumpall, pg_restore, and pg_dump, so move
the common code to a new file.
New file name: connectdb.c
---
src/bin/pg_dump/Makefile | 9 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 2 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 75 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 279 ++-----------------------
9 files changed, 352 insertions(+), 343 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..c488ab4aecf 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -40,7 +40,8 @@ OBJS = \
pg_backup_directory.o \
pg_backup_null.o \
pg_backup_tar.o \
- pg_backup_utils.o
+ pg_backup_utils.o \
+ connectdb.o
all: pg_dump pg_restore pg_dumpall
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o dumputils.o filter.o connectdb.o pg_backup_utils.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o connectdb.o pg_backup_utils.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
@@ -71,5 +72,5 @@ uninstall:
rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
clean distclean:
- rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o
+ rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o connectdb.o
rm -rf tmp_check
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..3e1fbe98c25
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common code for connecting to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, 'connstr' is set to a connection string containing the
+ * options used, and 'server_version' is set to the server's version, for
+ * the caller's use.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used in
+ * the form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..9e1e7ef33d0
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for database connection routines.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..d5f805fb511 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -30,6 +30,7 @@ pg_dump_sources = files(
'common.c',
'pg_dump.c',
'pg_dump_sort.c',
+ 'connectdb.c',
)
if host_system == 'windows'
@@ -49,6 +50,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
'pg_dumpall.c',
+ 'connectdb.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index e783cc68d89..731cb2d19fb 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -291,7 +291,7 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
+extern void ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 7480e122b61..12f3f39e39b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -413,7 +413,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4430,7 +4430,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5047,7 +5047,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..227dd963984 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
+ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c371570501a..6bb54c1a2b4 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -934,7 +934,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e0867242526..e7e492afa28 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,15 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
+const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -125,8 +119,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -483,19 +475,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1718,256 +1713,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
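As a rough standalone illustration of what the shared `constructConnStr()` above emits: it builds a `key='value'` connection string, skipping `dbname`, `password`, and `fallback_application_name`. The sketch below does not use libpq; `build_connstr` and its quoting are simplified stand-ins for `PQExpBuffer`/`appendConnStrVal`, with a caller-supplied buffer instead of dynamic allocation.

```c
#include <stdio.h>
#include <string.h>

/* Keywords that constructConnStr() excludes from the result. */
static int excluded_keyword(const char *kw)
{
    return strcmp(kw, "dbname") == 0 ||
           strcmp(kw, "password") == 0 ||
           strcmp(kw, "fallback_application_name") == 0;
}

/*
 * Build a space-separated key=value connection string into buf.
 * Values containing spaces, quotes, or backslashes are single-quoted,
 * with ' and \ backslash-escaped (mirroring appendConnStrVal).
 */
static char *build_connstr(const char **keywords, const char **values,
                           char *buf, size_t bufsz)
{
    size_t used = 0;
    int first = 1;

    buf[0] = '\0';
    for (int i = 0; keywords[i] != NULL; i++)
    {
        const char *v = values[i];
        int need_quotes;

        if (excluded_keyword(keywords[i]))
            continue;

        need_quotes = (*v == '\0' || strpbrk(v, " '\\") != NULL);

        if (!first)
            used += snprintf(buf + used, bufsz - used, " ");
        first = 0;

        used += snprintf(buf + used, bufsz - used, "%s=", keywords[i]);
        if (need_quotes)
        {
            used += snprintf(buf + used, bufsz - used, "'");
            for (; *v; v++)
            {
                if (*v == '\'' || *v == '\\')
                    used += snprintf(buf + used, bufsz - used, "\\");
                used += snprintf(buf + used, bufsz - used, "%c", *v);
            }
            used += snprintf(buf + used, bufsz - used, "'");
        }
        else
            used += snprintf(buf + used, bufsz - used, "%s", v);
    }
    return buf;
}
```

For example, `{"host","password","port"}` with `{"localhost","secret","5432"}` yields `host=localhost port=5432`, since the password is dropped and no value needs quoting.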
Attachment: v23_0002_pg_dumpall-with-non-text_format-11th_march.patch
From bc18451be43867723959f20f7192007342964393 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 11 Mar 2025 19:29:15 +0530
Subject: [PATCH 2/2] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (default: plain text)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
the dump layout is:
global.dat ::: global SQL commands in simple plain format
map.dat    ::: "dboid dbname" entries for all databases in simple text form
databases  :::
    subdir dboid1 -> toc.dat and data files in archive format
    subdir dboid2 -> toc.dat and data files in archive format
    etc.
---------------------------------------------------------------------------
NOTE:
if needed, a single database can be restored from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres database
-- to find the dboid for a database name, refer to map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat in order to restore all databases. If a global.dat file exists in the
directory, first restore all globals from global.dat and then restore the databases
one by one from the map.dat list (if it exists).
for --exclude-database=PATTERN in pg_restore:
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if there is no db connection, only exact PATTERN=NAME matching is done
for each database, the on_exit_nicely_index list is reset.
at the end of the restore, a warning is printed with the total number of errors
(including global.dat and per-database errors), and for each database a warning
is printed with the database name and its error count.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
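The map.dat layout described above (one "dboid dbname" entry per line) could be consumed by something like the following standalone sketch. `parse_map_line` is illustrative only, not the patch's actual parser, and it assumes database names without spaces, which the real format would need to handle.

```c
#include <stdio.h>
#include <string.h>

/*
 * Parse one map.dat line of the form "<dboid> <dbname>".
 * Returns 1 on success (filling *dboid and dbname), 0 on malformed
 * input or if the name does not fit in the caller's buffer.
 */
static int parse_map_line(const char *line, unsigned int *dboid,
                          char *dbname, size_t namesz)
{
    char namebuf[256];

    if (sscanf(line, "%u %255s", dboid, namebuf) != 2)
        return 0;
    if (strlen(namebuf) + 1 > namesz)
        return 0;
    strcpy(dbname, namebuf);
    return 1;
}
```

For example, the line `5 postgres` parses to dboid 5 and name "postgres", matching the `dumpDirName/databases/5` restore example above.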
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/meson.build | 3 +-
src/bin/pg_dump/parallel.c | 11 +-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 3 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 22 +-
src/bin/pg_dump/pg_backup_utils.h | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 280 +++++++--
src/bin/pg_dump/pg_restore.c | 847 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
15 files changed, 1248 insertions(+), 79 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index c2fa5be9519..c36802e06fd 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file or other archive format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump all databases, pass a
+ non-plain format so that each database's dump can be written to a
+ separate subdirectory in archive format.
+ By default, this is plain format.
+
+ If a non-plain mode is passed, then global.dat (global SQL commands) and
+ map.dat (dboid and dbname list of all the databases) files will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be one entry named for the
+ dboid of each database, and if <option>--format</option> is directory, then
+ toc.dat and other dump files will be under the dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under the dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 199ea3345f3..46bdbc092c3 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index d5f805fb511..dc1ed410838 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -16,6 +16,7 @@ pg_dump_common_sources = files(
'pg_backup_null.c',
'pg_backup_tar.c',
'pg_backup_utils.c',
+ 'connectdb.c',
)
pg_dump_common = static_library('libpgdump_common',
@@ -30,7 +31,6 @@ pg_dump_sources = files(
'common.c',
'pg_dump.c',
'pg_dump_sort.c',
- 'connectdb.c',
)
if host_system == 'windows'
@@ -70,6 +70,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
'pg_restore.c',
+ 'connectdb.c',
)
if host_system == 'windows'
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..a36d2a5bf84 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -326,11 +326,18 @@ getThreadLocalPQExpBuffer(void)
* pg_dump and pg_restore call this to register the cleanup handler
* as soon as they've created the ArchiveHandle.
*/
-void
+int
on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
- on_exit_nicely(archive_close_connection, &shutdown_info);
+ return on_exit_nicely(archive_close_connection, &shutdown_info);
+}
+
+void
+replace_on_exit_close_archive(Archive *AHX, int idx)
+{
+ shutdown_info.AHX = AHX;
+ set_on_exit_nicely_entry(archive_close_connection, &shutdown_info, idx);
}
/*
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 731cb2d19fb..41ee305850a 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -309,7 +309,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 12f3f39e39b..ad2a62a2c9c 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -336,9 +336,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append output to the file, since we are restoring a
+ * dump of multiple databases that was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -455,7 +460,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1291,7 +1296,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1670,7 +1675,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1690,7 +1696,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..ae433132435 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -385,7 +385,8 @@ struct _tocEntry
};
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
-extern void on_exit_close_archive(Archive *AHX);
+extern int on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX, int idx);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..59ece2999a8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -61,14 +61,26 @@ set_dump_section(const char *arg, int *dumpSections)
/* Register a callback to be run when exit_nicely is invoked. */
-void
+int
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
- if (on_exit_nicely_index >= MAX_ON_EXIT_NICELY)
- pg_fatal("out of on_exit_nicely slots");
- on_exit_nicely_list[on_exit_nicely_index].function = function;
- on_exit_nicely_list[on_exit_nicely_index].arg = arg;
+ set_on_exit_nicely_entry(function, arg, on_exit_nicely_index);
on_exit_nicely_index++;
+
+ return (on_exit_nicely_index - 1);
+}
+
+void
+set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int i)
+{
+ if (i >= MAX_ON_EXIT_NICELY)
+ pg_fatal("out of on_exit_nicely slots");
+
+ if (i > on_exit_nicely_index)
+ pg_fatal("no on_exit_nicely entry exists at index %d", i);
+
+ on_exit_nicely_list[i].function = function;
+ on_exit_nicely_list[i].arg = arg;
}
/*
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index 38551944513..57f3197f103 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -28,7 +28,8 @@ typedef void (*on_exit_nicely_callback) (int code, void *arg);
extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
-extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern int on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern void set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int idx);
extern void exit_nicely(int code) pg_attribute_noreturn();
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6bb54c1a2b4..5b6d4364eb3 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1186,7 +1186,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e7e492afa28..ee7c62da09d 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
const char *progname;
@@ -104,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
static int statistics_only = 0;
@@ -143,6 +147,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -188,6 +193,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -237,7 +244,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -265,7 +272,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -414,6 +423,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * A non-plain format requires a file name (-f/--file), which is used
+ * to create the top-level output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -468,6 +492,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. For a
+ * non-plain format, create the output directory and the global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -507,19 +558,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -619,7 +657,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -632,7 +670,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -643,12 +681,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or an archive.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1551,10 +1591,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1568,7 +1611,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1576,9 +1619,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For a non-plain (directory/tar/custom) format, create a "databases"
+ * subdirectory under the main directory; each database is then dumped
+ * into its own archive under that subdirectory, as with a
+ * single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1593,6 +1660,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For a non-plain dump format, append the database OID and name to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line per database, with its OID and name, in the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1611,9 +1690,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1622,19 +1709,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1644,7 +1742,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1653,17 +1752,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, append the output file name and the
+ * archive format to the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1808,3 +1926,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name. If an empty directory
+ * with that name already exists, use it.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", "
+ "or run %s with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 13e4dc507e0..9b802e7a6bd 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,30 +41,77 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
+ Oid db_oid, const char *dbname);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
+static int on_exit_index = 0;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -86,6 +133,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -136,6 +184,7 @@ main(int argc, char **argv)
{"no-statistics", no_argument, &no_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -164,7 +213,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -191,11 +240,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -310,6 +362,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -337,6 +393,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -417,6 +480,108 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If no toc.dat file is present in the given path, check for global.dat.
+ * If global.dat is present, restore all databases listed in map.dat (if
+ * it exists), skipping any that match an --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * The -l/--list option is only supported for single-database dumps.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified. Report an error even if the dump contains only a single
+ * database, since that database might not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the archive contains a single database, restore that database's dump file directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to the database to execute global SQL commands from global.dat.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /* Open global.dat and execute or append all the global SQL commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist, so restore a single database */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ on_exit_index = 0; /* Reset index. */
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore a single database from its toc.dat file.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -425,8 +590,14 @@ main(int argc, char **argv)
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
* it's still NULL, the cleanup function will just be a no-op.
+ *
+ * If we are restoring multiple databases, save the exit_nicely index so
+ * that the same slot can be reused for each database, since the previous
+ * archive was already closed by CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_index = on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH, on_exit_index);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -446,25 +617,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, multiple databases can be restored.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -482,6 +650,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -494,6 +663,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_("  --exclude-database=PATTERN   exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -530,8 +700,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -636,3 +806,648 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer, character by character, up to a
+ * semicolon (the SQL statement terminator in global.dat).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries matching a pattern in the
+ * db_exclude_patterns list; the list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available, so --exclude-database patterns are matched as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct a pattern-matching query of the form:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * where XXX is the database name taken from dbname_oid_list, which was
+ * originally read from the map.dat file in the backup directory; that
+ * is why quote_literal_cstr is needed.
+ *
+ * If there is no database connection, treat PATTERN as a literal name.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the excluded database from the list; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read the map.dat file line by line and build a list of database names
+ * with their corresponding OIDs.
+ *
+ * Returns the total number of database entries in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, there is no database to
+ * restore, so return early.
+ */
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore as map.dat file is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append each dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract the database OID and its string form. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* The rest of the line is the database name. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file", dbname, db_oid);
+
+ /* Report an error and exit if the file contains corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding a dbname to the list, we could check whether the
+ * database is to be skipped during restore; for now, we simply list
+ * all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the dump directory,
+ * based on the map.dat file mapping.
+ *
+ * Restore is skipped for databases specified with the --exclude-database
+ * option.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entries, just process the global.dat file and
+ * return.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\"; trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN/NAME; without a
+ * connection, patterns are treated as literal names.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We now have the list of databases to restore, with the excluded
+ * names already removed. Launch workers to restore these databases,
+ * possibly in parallel.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while(dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * Reset override_dbname (set by the -d/--dbname option) so that objects
+ * are restored into the already-created database.
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+ count++;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time, treating a semicolon as the statement terminator.
+ * If outfile is given, copy all SQL commands into outfile rather than
+ * executing them.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: \"%s\"\nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors during global.dat processing. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the global.dat file into the output file. If "-" is given as
+ * outfile, print the commands to stdout instead.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ OPF = fopen(outfile, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname and dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell->db_name);
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete a cell from the database name and OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ list->head = cell->next;
+ else
+ prev->next = cell->next;
+
+ /* Keep the tail pointer valid when deleting the last cell. */
+ if (list->tail == cell)
+ list->tail = prev;
+
+ pfree(cell->db_name);
+ pfree(cell);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index dfe2690bdd3..a922d983514 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2697,6 +2697,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding number of characters
in dbname but as per code comments, as of now, we are not supporting \n\r
in dbnames so i removed handling.
I will do some more study to fix this issue.
Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.
Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
Álvaro Herrera <alvherre@alvh.no-ip.org> writes:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding number of characters
in dbname but as per code comments, as of now, we are not supporting \n\r
in dbnames so i removed handling.
I will do some more study to fix this issue.
Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.
Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.
I haven't looked at the code for this, but why are we inventing an
ad-hoc file format? Why not use JSON, like we do for backup manifests?
Then storing arbitrary database names won't be a problem.
- ilmari
On 2025-03-11 Tu 1:52 PM, Dagfinn Ilmari Mannsåker wrote:
Álvaro Herrera <alvherre@alvh.no-ip.org> writes:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding number of characters
in dbname but as per code comments, as of now, we are not supporting \n\r
in dbnames so i removed handling.
I will do some more study to fix this issue.
Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.
Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.
I haven't looked at the code for this, but why are we inventing an
ad-hoc file format? Why not use JSON, like we do for backup manifests?
Then storing arbitrary database names won't be a problem.
I'm not sure everyone thinks that was a good idea for backup manifests
(in fact I know some don't), and it seems somewhat like overkill for a
simple map of oids to database names.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2025-Mar-11, Andrew Dunstan wrote:
I'm not sure everyone thinks that was a good idea for backup manifests (in
fact I know some don't), and it seems somewhat like overkill for a simple
map of oids to database names.
If such a simple system can be made to work for all possible valid
database names, then I agree with you. But if it forces us to restrict
database names to not contain newlines or other funny chars that are so
far unrestricted, then I would take the other position.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"La victoria es para quien se atreve a estar solo"
On 2025-03-11 Tu 5:03 PM, Álvaro Herrera wrote:
On 2025-Mar-11, Andrew Dunstan wrote:
I'm not sure everyone thinks that was a good idea for backup manifests (in
fact I know some don't), and it seems somewhat like overkill for a simple
map of oids to database names.
If such a simple system can be made to work for all possible valid
database names, then I agree with you. But if it forces us to restrict
database names to not contain newlines or other funny chars that are so
far unrestricted, then I would take the other position.
Well, JSON is supposed to be UTF8. What should we do about database
names that are not UTF8?
It's kinda tempting to say we should have the file consist of lines like:
oid base64_encoded_name escaped_human_readable name
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Tue, 11 Mar 2025 at 18:37, Andrew Dunstan <andrew@dunslane.net> wrote:
Well, JSON is supposed to be UTF8. What should we do about database
names that are not UTF8?
How can you have a database name that isn't encodeable in UTF-8? At this
point I'm pretty sure Unicode has subsumed essentially every character ever
mentioned in a standards document.
On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding number of characters
in dbname but as per code comments, as of now, we are not supporting \n\r
in dbnames so i removed handling.
I will do some more study to fix this issue.
Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.
Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.
I am confused.
currently pg_dumpall plain format will abort when encountering dbname
containing newline.
the left dumped plain file does not contain all the cluster databases data.
if pg_dumpall non-text format aborts earlier,
it's aligned with pg_dumpall plain format?
it's also an improvement since aborts earlier, nothing will be dumped?
am i missing something?
On Tue, 2025-03-11 at 19:14 -0400, Isaac Morland wrote:
On Tue, 11 Mar 2025 at 18:37, Andrew Dunstan <andrew@dunslane.net> wrote:
Well, JSON is supposed to be UTF8. What should we do about database
names that are not UTF8?
How can you have a database name that isn't encodeable in UTF-8? At this
I'm pretty sure Unicode has subsumed essentially every character ever mentioned
in a standards document.
There is a difference between "encodable" and "encoded". You'd have to figure
out the actual encoding of the database name and convert that to UTF-8.
Yours,
Laurenz Albe
On 2025-03-12 We 3:03 AM, jian he wrote:
On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding number of characters
in dbname but as per code comments, as of now, we are not supporting \n\r
in dbnames so i removed handling.
I will do some more study to fix this issue.
Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.
Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.
I am confused.
currently pg_dumpall plain format will abort when encountering dbname
containing newline.
the left dumped plain file does not contain all the cluster databases data.
if pg_dumpall non-text format aborts earlier,
it's aligned with pg_dumpall plain format?
it's also an improvement since aborts earlier, nothing will be dumped?
am i missing something?
I think we should fix that.
But for the current proposal, Álvaro and I were talking this morning,
and we thought the simplest thing here would be to have the one line
format and escape NL/CRs in the database name.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-12 We 3:03 AM, jian he wrote:
On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding number of characters
in dbname but as per code comments, as of now, we are not supporting \n\r
in dbnames so i removed handling.
I will do some more study to fix this issue.
Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.
Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.
I am confused.
currently pg_dumpall plain format will abort when encountering dbname
containing newline.
the left dumped plain file does not contain all the cluster databases data.
if pg_dumpall non-text format aborts earlier,
it's aligned with pg_dumpall plain format?
it's also an improvement since aborts earlier, nothing will be dumped?
am i missing something?
I think we should fix that.
But for the current proposal, Álvaro and I were talking this morning,
and we thought the simplest thing here would be to have the one line
format and escape NL/CRs in the database name.
cheers
Okay. As per the discussion, we will keep a one-line entry for each
database in the map.dat file.
Thanks all for feedback and review.
Here, I am attaching updated patches for review and testing. These
patches can be applied on commit a6524105d20b.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v24_0001_move-common-code-of-pg_dumpall-and-pg_restore-to-new_file.patch
From 4877f3617511d245edf0012dc8ae828dd8a595e3 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:18:46 +0530
Subject: [PATCH 1/2] move common code related to connection to a new file
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump,
so move the common code to a new file.
New file name: connectdb.c
---
src/bin/pg_dump/Makefile | 5 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 3 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 75 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +------------------------
9 files changed, 350 insertions(+), 341 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..fa795883e9f 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -31,6 +31,7 @@ OBJS = \
compress_lz4.o \
compress_none.o \
compress_zstd.o \
+ connectdb.o \
dumputils.o \
filter.o \
parallel.o \
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..3e1fbe98c25
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common code for connecting to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, 'connstr' is set to a connection string containing the
+ * options used, and 'server_version' is set to the server's version,
+ * so that the caller can use them.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used
+ * in the form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, copy the server version to the output variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..9e1e7ef33d0
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..9031737d013 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -6,6 +6,7 @@ pg_dump_common_sources = files(
'compress_lz4.c',
'compress_none.c',
'compress_zstd.c',
+ 'connectdb.c',
'dumputils.c',
'filter.c',
'parallel.c',
@@ -48,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
+ 'connectdb.c',
'pg_dumpall.c',
)
@@ -67,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
+ 'connectdb.c',
'pg_restore.c',
)
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 658986de6f8..c68a21027fa 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -293,7 +293,7 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
+extern void ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 82d51c89ac6..3fd2818223c 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -414,7 +414,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4437,7 +4437,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5054,7 +5054,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..227dd963984 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
+ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 428ed2d60fc..f81667403dc 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -935,7 +935,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 2935cac2c46..455103e38bc 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,14 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -126,8 +119,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -487,19 +478,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1723,256 +1717,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
Attachment: v24_0002_pg_dumpall-with-non-text_format-18th_march.patch
From 2138efb24a1c84d182b3ef4ce78ae620d6cef56f Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:30:12 +0530
Subject: [PATCH 2/2] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
dumps are laid out as:
global.dat ::: global SQL commands in simple plain format
map.dat ::: dboid dbname --- entries for all databases in simple text form
databases :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
if needed, restore a single database from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres db
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, with no database restoring.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat in order to restore all databases. If a global.dat file exists in the directory,
first restore all globals from global.dat and then restore the databases one by one
from the map.dat list (if it exists)
for --exclude-database=PATTERN in pg_restore:
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if there is no db connection, then only exact PATTERN=NAME matching is done
for each database, we reset the on_exit_nicely list.
at the end of the restore, we give a warning with the total number of errors (including global.dat
and each database's errors), and for each database we print a warning with the dbname and its total
errors.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 80 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/parallel.c | 11 +-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 3 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 22 +-
src/bin/pg_dump/pg_backup_utils.h | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 280 +++++++--
src/bin/pg_dump/pg_restore.c | 847 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1246 insertions(+), 78 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index ae5afb3c7d5..c38906a1aac 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster in the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,7 +121,83 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- </para>
+ Note: This option can be omitted only when <option>--format</option> is plain.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump all the databases,
+ pass a non-plain format so that the dump of each database is stored
+ in a separate subdirectory in archive format.
+ The default is plain format.
+
+ If a non-plain format is passed, a global.dat file (global SQL commands) and
+ a map.dat file (the dboid and dbname of every database) will be created.
+ Apart from these files, one subdirectory named databases will be created.
+ Under this databases subdirectory, there will be one entry named after each
+ database's dboid, and if <option>--format</option> is directory, toc.dat and the other
+ dump files will be under that dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under the dboid
+ subdirectory, this will create a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 35140187807..4cf46ea9333 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a dump created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..a36d2a5bf84 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -326,11 +326,18 @@ getThreadLocalPQExpBuffer(void)
* pg_dump and pg_restore call this to register the cleanup handler
* as soon as they've created the ArchiveHandle.
*/
-void
+int
on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
- on_exit_nicely(archive_close_connection, &shutdown_info);
+ return on_exit_nicely(archive_close_connection, &shutdown_info);
+}
+
+void
+replace_on_exit_close_archive(Archive *AHX, int idx)
+{
+ shutdown_info.AHX = AHX;
+ set_on_exit_nicely_entry(archive_close_connection, &shutdown_info, idx);
}
/*
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index c68a21027fa..89459dedc4b 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -311,7 +311,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 3fd2818223c..e22a8810b45 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -337,9 +337,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append output to the file, since we are restoring a
+ * dump of multiple databases that was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -456,7 +461,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1292,7 +1297,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1671,7 +1676,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1691,7 +1697,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..ae433132435 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -385,7 +385,8 @@ struct _tocEntry
};
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
-extern void on_exit_close_archive(Archive *AHX);
+extern int on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX, int idx);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..59ece2999a8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -61,14 +61,26 @@ set_dump_section(const char *arg, int *dumpSections)
/* Register a callback to be run when exit_nicely is invoked. */
-void
+int
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
- if (on_exit_nicely_index >= MAX_ON_EXIT_NICELY)
- pg_fatal("out of on_exit_nicely slots");
- on_exit_nicely_list[on_exit_nicely_index].function = function;
- on_exit_nicely_list[on_exit_nicely_index].arg = arg;
+ set_on_exit_nicely_entry(function, arg, on_exit_nicely_index);
on_exit_nicely_index++;
+
+ return (on_exit_nicely_index - 1);
+}
+
+void
+set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int i)
+{
+ if (i >= MAX_ON_EXIT_NICELY)
+ pg_fatal("out of on_exit_nicely slots");
+
+ if (i > on_exit_nicely_index)
+ pg_fatal("no entry exists at index %d in the on_exit_nicely slots", i);
+
+ on_exit_nicely_list[i].function = function;
+ on_exit_nicely_list[i].arg = arg;
}
/*
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index ba042016879..bbefdc112f5 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -28,7 +28,8 @@ typedef void (*on_exit_nicely_callback) (int code, void *arg);
extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
-extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern int on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern void set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int idx);
pg_noreturn extern void exit_nicely(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index f81667403dc..76f74ba1666 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1188,7 +1188,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 455103e38bc..1a57077986b 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -104,7 +108,7 @@ static int no_subscriptions = 0;
static int no_toast_compression = 0;
static int no_unlogged_table_data = 0;
static int no_role_passwords = 0;
-static int server_version;
+static int server_version;
static int load_via_partition_root = 0;
static int on_conflict_do_nothing = 0;
static int statistics_only = 0;
@@ -143,6 +147,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -189,6 +194,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -238,7 +245,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -266,7 +273,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -415,6 +424,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, the user must provide the file
+ * name to create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -471,6 +495,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. For a
+ * non-plain format, create the output directory and the global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create the new directory, or accept an existing empty one. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -510,19 +561,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -622,7 +660,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -635,7 +673,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -646,12 +684,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1555,10 +1595,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1572,7 +1615,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1580,9 +1623,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; each database is then dumped into its own
+ * file or subdirectory under it, as with a single-database pg_dump.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create the map file (storing database OID/name pairs). */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1597,6 +1664,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * If this is a non-plain dump format, append the database OID and name
+ * to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put a one-line entry for the OID and name in the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1615,9 +1694,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases, so add the --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1626,19 +1713,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1648,7 +1746,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1657,17 +1756,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is a non-plain format dump, append the output file name and
+ * the dump format to the pg_dump command to produce an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1812,3 +1930,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or use it if an empty
+ * directory with that name already exists.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into directory \"%s\", either remove or empty it, "
+ "or run %s with a different target directory.",
+ dirname, progname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 337e64a8a29..ef164fbe4bb 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,30 +41,77 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
+static int on_exit_index = 0;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -87,6 +134,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -138,6 +186,7 @@ main(int argc, char **argv)
{"no-statistics", no_argument, &no_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -166,7 +215,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -193,11 +242,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -312,6 +364,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* list of database patterns to skip during restore */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -339,6 +395,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -420,6 +483,108 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, check for global.dat. If
+ * global.dat is present, restore all the databases listed in map.dat (if
+ * it exists), skipping any that match an --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * The -l/--list option is supported only for single-database dumps.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must be
+ * specified. Report an error even if the dump contains only a single
+ * database, because that database might not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the dump contains a single database, restore that database's dump file directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist; restore a single database. */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ on_exit_index = 0; /* Reset index. */
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from its toc.dat file.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -428,8 +593,14 @@ main(int argc, char **argv)
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
* it's still NULL, the cleanup function will just be a no-op.
+ * If we are restoring multiple databases, save the index into the
+ * exit_nicely slot array so that the same slot can be reused for each
+ * database, since the previous archive was already closed by CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_index = on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH, on_exit_index);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -449,25 +620,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, it can also restore multiple databases.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -485,6 +653,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -497,6 +666,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -534,8 +704,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -640,3 +810,648 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file pointer using fgetc until a semicolon, the SQL
+ * statement terminator used in global.dat, is seen.
+ *
+ * Returns EOF if end-of-file is reached before any input; otherwise 'Q'.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from fgetc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
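The quoting rule that ReadOneStatement implements (a `;` inside single or double quotes does not terminate the statement) can be checked on a plain buffer with a sketch like the following; `stmt_end` is an illustrative name, not part of the patch:

```c
#include <stddef.h>

/* Sketch of the ReadOneStatement() quoting rule: return the offset just
 * past the first ';' that lies outside single or double quotes, or the
 * buffer length if no terminator is found. */
static size_t
stmt_end(const char *buf)
{
    char    quote = '\0';
    size_t  i;

    for (i = 0; buf[i] != '\0'; i++)
    {
        if (quote)
        {
            if (buf[i] == quote)
                quote = '\0';       /* closing quote seen */
        }
        else if (buf[i] == '\'' || buf[i] == '"')
            quote = buf[i];         /* enter quoted section */
        else if (buf[i] == ';')
            return i + 1;           /* statement terminator */
    }
    return i;                       /* end of input, no terminator */
}
```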
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries matching a pattern in the
+ * db_exclude_patterns list; dbname_oid_list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("patterns for --exclude-database will be matched as literal names because there is no database connection");
+
+ /*
+ * Process each database name, removing it from the list if it is to be
+ * skipped on restore.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct a pattern-matching query:
+ *
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$'
+ * COLLATE pg_catalog.default
+ *
+ * where XXX is the string-literal database name taken from
+ * dbname_oid_list, which was read from the map.dat file in the backup
+ * directory; that is why quote_literal_cstr is needed.
+ *
+ * If there is no connection, treat PATTERN as a literal name.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Remove the database if it matched an exclude pattern; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names and their
+ * corresponding OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return here, as there is
+ * no database to restore.
+ */
+ if (!IsFileExistsInDirectory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restore is skipped because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append each dbname and db_oid to the list. */
+ while (fgets(line, MAXPGPATH, pfile) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract the database OID. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%s", db_oid_str);
+
+ /* The rest of the line is the database name. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding the dbname to the list, we could check whether
+ * this database is to be skipped on restore, but for now we list all
+ * the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
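The one-line "oid dbname" format parsed above can be exercised in isolation. The following is a minimal standalone sketch, not the patch's actual code (the patch uses sscanf() twice plus pointer arithmetic, but the contract is the same): extract the OID, then take everything after the first space, minus the trailing newline, as the database name.

```c
#include <stdio.h>
#include <string.h>

/*
 * Parse one "oid dbname" line of the map.dat format discussed in this
 * thread.  Hypothetical helper for illustration only.
 * Returns 1 on success, 0 on a malformed line.
 */
static int
parse_map_line(const char *line, unsigned int *db_oid,
			   char *dbname, size_t dbname_sz)
{
	const char *sep = strchr(line, ' ');
	size_t		len;

	if (sep == NULL || sscanf(line, "%u", db_oid) != 1)
		return 0;

	/* Name is everything after the first space, minus a trailing newline. */
	len = strlen(sep + 1);
	if (len > 0 && sep[len] == '\n')
		len--;
	if (len == 0 || len >= dbname_sz)
		return 0;

	memcpy(dbname, sep + 1, len);
	dbname[len] = '\0';
	return 1;
}
```

Note that, like the patch, this assumes the name contains no newline; the escaping question for such names is discussed later in the thread.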
+
+/*
+ * restoreAllDatabases
+ *
+ * This will restore all databases whose dumps are present in the
+ * directory, based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing
+ * global.dat file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("failed to connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN/NAME.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * By now we have the list of databases to restore, after filtering out
+ * the --exclude-database names. Now we can launch parallel workers to
+ * restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while(dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * This will open the global.dat file and execute the global SQL commands
+ * one statement at a time.
+ * A semicolon is treated as the statement terminator. If outfile is passed,
+ * all SQL commands are copied into outfile rather than executed.
+ *
+ * returns the number of errors while processing global.dat
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of errors ignored while processing global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
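The statement-splitting loop above relies on ReadOneStatement() treating a semicolon as the terminator for global.dat. A toy sketch of that technique, for illustration only (a real reader, like the one in the patch, must also cope with semicolons inside quoted strings, which this version deliberately ignores):

```c
#include <stdio.h>

/*
 * Read the next semicolon-terminated statement from a stream into buf.
 * Illustrative sketch of the ReadOneStatement() technique; quoted
 * strings are not handled here.
 * Returns the number of characters read, or EOF at end of stream.
 */
static int
read_statement(FILE *fp, char *buf, size_t bufsz)
{
	size_t	n = 0;
	int		c = EOF;

	while (n + 1 < bufsz && (c = fgetc(fp)) != EOF)
	{
		buf[n++] = (char) c;
		if (c == ';')
			break;				/* statement terminator */
	}
	buf[n] = '\0';
	return (n == 0 && c == EOF) ? EOF : (int) n;
}
```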
+
+/*
+ * copy_or_print_global_file
+ *
+ * This will copy global.dat file into out file. If "-" is used as outfile,
+ * then print commands to the stdout.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Append a node at the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Delete all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree(cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Delete all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Delete the given cell from the database name/OID list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
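The list helpers above follow the usual head/tail singly-linked-list pattern used elsewhere in fe_utils/simple_list. A self-contained sketch of the same technique, with illustrative names (not the patch's actual types); note that, like the patch's delete function, it unlinks via a caller-supplied predecessor and does not maintain list->tail:

```c
#include <stdlib.h>

/* Minimal head/tail singly-linked list mirroring the patch's
 * SimpleDatabaseOidList technique; names here are illustrative. */
typedef struct Cell
{
	struct Cell *next;
	unsigned int oid;
} Cell;

typedef struct
{
	Cell	   *head;
	Cell	   *tail;
} List;

static void
list_append(List *list, unsigned int oid)
{
	Cell	   *cell = malloc(sizeof(Cell));

	cell->next = NULL;
	cell->oid = oid;
	if (list->tail)
		list->tail->next = cell;	/* link after current tail */
	else
		list->head = cell;			/* first element */
	list->tail = cell;
}

/* Unlink 'cell', whose predecessor is 'prev' (NULL if cell is the head).
 * Like the patch's version, this does not update list->tail. */
static void
list_delete(List *list, Cell *cell, Cell *prev)
{
	if (prev == NULL)
		list->head = cell->next;
	else
		prev->next = cell->next;
	free(cell);
}
```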
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
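For reference, the quoting rule implemented above (copied from quote.c) doubles single quotes and backslashes, and prefixes the literal with E when a backslash is present, selecting escape string syntax. A standalone sketch of the same rule, simplified for illustration and assuming std_strings behavior:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Standalone sketch of the quote_literal_cstr() rule above: single
 * quotes and backslashes are doubled, and the literal gains an E
 * prefix when it contains a backslash.  Illustrative only.
 */
static char *
quote_literal(const char *raw)
{
	size_t		len = strlen(raw);
	char	   *result = malloc(len * 2 + 4);	/* worst case: all doubled + E'' + NUL */
	char	   *dst = result;
	const char *s;

	if (strchr(raw, '\\') != NULL)
		*dst++ = 'E';			/* escape string syntax */
	*dst++ = '\'';
	for (s = raw; *s; s++)
	{
		if (*s == '\'' || *s == '\\')
			*dst++ = *s;		/* double it */
		*dst++ = *s;
	}
	*dst++ = '\'';
	*dst = '\0';
	return result;
}
```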
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index c04a47cf222..be75739e995 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2722,6 +2722,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote:
On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-12 We 3:03 AM, jian he wrote:
On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In map.dat file, I tried to fix this issue by adding the number of characters in dbname, but as per code comments, as of now we are not supporting \n\r in dbnames, so I removed that handling. I will do some more study to fix this issue.

Yeah, I think this is saying that you should not consider the contents of map.dat as a shell string. After all, you're not going to _execute_ that file via the shell.

Maybe for map.dat you need to escape such characters somehow, so that they don't appear as literal newlines/carriage returns.

I am confused. Currently pg_dumpall plain format will abort when encountering a dbname containing a newline, and the partially dumped plain file does not contain all the cluster databases' data. If pg_dumpall non-text format aborts earlier, isn't that aligned with pg_dumpall plain format? It's also an improvement, since if it aborts earlier nothing will be dumped. Am I missing something?

I think we should fix that. But for the current proposal, Álvaro and I were talking this morning, and we thought the simplest thing here would be to have the one-line format and escape NL/CRs in the database name.

cheers
Okay. As per the discussions, we will keep a one-line entry for each database in the map file.

Thanks all for the feedback and review. Here, I am attaching updated patches for review and testing. These patches can be applied on commit a6524105d20b.
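The escaping discussed above (keeping map.dat one line per database while escaping NL/CRs in the name) could look roughly like the sketch below. This is illustrative only; the escape sequences chosen here are assumptions, not the committed format. The backslash itself is escaped too, so decoding stays unambiguous.

```c
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of one way to escape NL/CR in a database name so that map.dat
 * can keep its one-line-per-database format.  Illustrative only.
 */
static char *
escape_dbname(const char *raw)
{
	char	   *out = malloc(strlen(raw) * 2 + 1);	/* worst case: all escaped */
	char	   *dst = out;

	for (; *raw; raw++)
	{
		switch (*raw)
		{
			case '\n':
				*dst++ = '\\';
				*dst++ = 'n';
				break;
			case '\r':
				*dst++ = '\\';
				*dst++ = 'r';
				break;
			case '\\':
				*dst++ = '\\';
				*dst++ = '\\';
				break;
			default:
				*dst++ = *raw;
		}
	}
	*dst = '\0';
	return out;
}
```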
I'm working through this patch set with a view to committing it.
Attached is some cleanup which is where I got to today, although there
is more to do. One thing I am wondering is why not put the
SimpleDatabaseOidList stuff in fe_utils/simple_list.{c,h}? That's where
all the similar stuff belongs, and it feels strange to have this inline
in pg_restore.c. (I also don't like the name much - SimpleOidStringList
or maybe SimpleOidPlusStringList might be better).
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
dumpall_cleanup.patch-noci (text/plain)
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index a3dcc585ace..6aab1bfe831 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -434,13 +434,13 @@ main(int argc, char *argv[])
archDumpFormat = parseDumpFormat(formatName);
/*
- * If non-plain format is specified then we must provide the
- * file name to create one main directory.
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
*/
if (archDumpFormat != archNull &&
(!filename || strcmp(filename, "") == 0))
{
- pg_log_error("options -F/--format=d|c|t requires option -f/--file with non-empty string");
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
exit_nicely(1);
}
@@ -513,14 +513,14 @@ main(int argc, char *argv[])
*/
if (archDumpFormat != archNull)
{
- char toc_path[MAXPGPATH];
+ char global_path[MAXPGPATH];
/* Create new directory or accept the empty existing directory. */
create_or_open_dir(filename);
- snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
- OPF = fopen(toc_path, PG_BINARY_W);
+ OPF = fopen(global_path, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open global.dat file: %s", strerror(errno));
}
@@ -1680,7 +1680,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
}
/*
- * If this is non-plain dump format, then append dboid and dbname to
+ * If this is not a plain format dump, then append dboid and dbname to
* the map.dat file.
*/
if (archDumpFormat != archNull)
@@ -1688,7 +1688,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
/* Put one line entry for dboid and dbname in map file. */
- fprintf(map_file, "%s %s\n", oid, pg_strdup(dbname));
+ fprintf(map_file, "%s %s\n", oid, dbname);
}
pg_log_info("dumping database \"%s\"", dbname);
@@ -1734,17 +1734,17 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
if (filename)
{
- char toc_path[MAXPGPATH];
+ char global_path[MAXPGPATH];
if (archDumpFormat != archNull)
- snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
else
- snprintf(toc_path, MAXPGPATH, "%s", filename);
+ snprintf(global_path, MAXPGPATH, "%s", filename);
- OPF = fopen(toc_path, PG_BINARY_A);
+ OPF = fopen(global_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- toc_path);
+ global_path);
}
}
@@ -1772,7 +1772,7 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
initPQExpBuffer(&cmd);
/*
- * If this is non-plain format dump, then append file name and dump
+ * If this is not a plain format dump, then append file name and dump
* format to the pg_dump command to get archive dump.
*/
if (archDumpFormat != archNull)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index e4093427e2f..91602a2e37b 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -46,8 +46,6 @@
#include <termios.h>
#endif
-#include "common/connect.h"
-#include "compress_io.h"
#include "common/string.h"
#include "connectdb.h"
#include "fe_utils/option_utils.h"
@@ -55,7 +53,6 @@
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
-#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
typedef struct SimpleDatabaseOidListCell
@@ -73,10 +70,10 @@ typedef struct SimpleDatabaseOidList
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
-static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool file_exists_in_directory(const char *dir, const char *filename);
static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, int num);
-static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int read_one_statement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
@@ -89,7 +86,6 @@ static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleDatabaseOidList *dbname_oid_list);
static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
const char *dbname);
-static void simple_string_full_list_delete(SimpleStringList *list);
static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
SimpleDatabaseOidListCell *cell,
@@ -521,8 +517,8 @@ main(int argc, char **argv)
* databases from map.dat(if exist) file list and skip restoring for
* --exclude-database patterns.
*/
- if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
- IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
+ file_exists_in_directory(inputFileSpec, "global.dat"))
{
PGconn *conn = NULL; /* Connection to restore global sql commands. */
@@ -578,7 +574,7 @@ main(int argc, char **argv)
}
/* Free db pattern list. */
- simple_string_full_list_delete(&db_exclude_patterns);
+ simple_string_list_destroy(&db_exclude_patterns);
}
else /* process if global.dat file does not exist. */
{
@@ -847,12 +843,12 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
}
/*
- * IsFileExistsInDirectory
+ * file_exists_in_directory
*
* Returns true if file exist in current directory.
*/
static bool
-IsFileExistsInDirectory(const char *dir, const char *filename)
+file_exists_in_directory(const char *dir, const char *filename)
{
struct stat st;
char buf[MAXPGPATH];
@@ -864,7 +860,7 @@ IsFileExistsInDirectory(const char *dir, const char *filename)
}
/*
- * ReadOneStatement
+ * read_one_statement
*
* This will start reading from passed file pointer using fgetc and read till
* semicolon(sql statement terminator for global.dat file)
@@ -873,7 +869,7 @@ IsFileExistsInDirectory(const char *dir, const char *filename)
*/
static int
-ReadOneStatement(StringInfo inBuf, FILE *pfile)
+read_one_statement(StringInfo inBuf, FILE *pfile)
{
int c; /* character read from getc() */
int m;
@@ -1064,7 +1060,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
* If there is only global.dat file in dump, then return from here as there
* is no database to restore.
*/
- if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
{
pg_log_info("databases restoring is skipped as map.dat file is not present in \"%s\"", dumpdirpath);
return 0;
@@ -1281,7 +1277,7 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
initStringInfo(&sqlstatement);
/* Process file till EOF and execute sql statements. */
- while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
{
pg_log_info("executing query: %s", sqlstatement.data);
result = PQexec(conn, sqlstatement.data);
@@ -1393,28 +1389,6 @@ simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
list->tail = NULL;
}
-/*
- * simple_string_full_list_delete
- *
- * delete all cell from string list.
- */
-static void
-simple_string_full_list_delete(SimpleStringList *list)
-{
- SimpleStringListCell *cell = list->head;
- SimpleStringListCell *cellnext = NULL;
-
- while (cell)
- {
- cellnext = cell->next;
- pfree(cell);
- cell = cellnext;
- }
-
- list->head = NULL;
- list->tail = NULL;
-}
-
/*
* simple_db_oid_list_delete
*
On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote:
On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote:
Here, I am attaching updated patches for review and testing. These patches can be applied on commit a6524105d20b.
I'm working through this patch set with a view to committing it. Attached is some cleanup which is where I got to today, although there is more to do. One thing I am wondering is why not put the SimpleDatabaseOidList stuff in fe_utils/simple_list.{c,h}? That's where all the similar stuff belongs, and it feels strange to have this inline in pg_restore.c. (I also don't like the name much - SimpleOidStringList or maybe SimpleOidPlusStringList might be better.)
OK, I have done that, so here is the result. The first two are your
original patches. Patch 3 adds the new list type to fe_utils, and patch
4 contains my cleanups and use of the new list type. Apart from some
relatively minor cleanup, the one thing I would like to change is how
dumps are named. If we are producing tar or custom format dumps, I think
the file names should reflect that (oid.dmp and oid.tar rather than a
bare oid as the filename), and pg_restore should look for those. I'm
going to work on that tomorrow - I don't think it will be terribly
difficult.
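The naming rule proposed here (a format-specific suffix on each per-database dump rather than a bare OID) could be sketched as follows. The enum values and suffixes are illustrative assumptions, not committed behavior:

```c
#include <stdio.h>

/* Sketch of the proposed naming rule: give each per-database dump a
 * suffix reflecting its archive format.  Illustrative only. */
typedef enum
{
	FMT_CUSTOM,
	FMT_TAR,
	FMT_DIRECTORY
} DumpFormat;

static void
dump_file_name(char *buf, size_t sz, unsigned int db_oid, DumpFormat fmt)
{
	const char *suffix = (fmt == FMT_TAR) ? ".tar"
		: (fmt == FMT_CUSTOM) ? ".dmp"
		: "";					/* directory dumps stay a bare OID dir */

	snprintf(buf, sz, "%u%s", db_oid, suffix);
}
```

pg_restore would then probe for each candidate name in turn when locating a database's dump.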
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
0001-move-common-code-related-to-connection-to-new-the-fi.patch (text/x-patch)
From 7105ed4a08b0c2b4d30e7b3eedb6c94882eb7421 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:18:46 +0530
Subject: [PATCH 1/4] move common code related to connection to new the file
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump,
so move the common code to a new file.
New file name: connectdb.c
---
src/bin/pg_dump/Makefile | 5 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 3 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 75 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +------------------------
9 files changed, 350 insertions(+), 341 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..fa795883e9f 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -31,6 +31,7 @@ OBJS = \
compress_lz4.o \
compress_none.o \
compress_zstd.o \
+ connectdb.o \
dumputils.o \
filter.o \
parallel.o \
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..3e1fbe98c25
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common code for connecting to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the 'connstr' is set to a connection string containing
+ * the options used and 'server_version' is set to version so that caller
+ * can use them.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in the
+ * form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..9e1e7ef33d0
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..9031737d013 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -6,6 +6,7 @@ pg_dump_common_sources = files(
'compress_lz4.c',
'compress_none.c',
'compress_zstd.c',
+ 'connectdb.c',
'dumputils.c',
'filter.c',
'parallel.c',
@@ -48,6 +49,7 @@ bin_targets += pg_dump
pg_dumpall_sources = files(
+ 'connectdb.c',
'pg_dumpall.c',
)
@@ -67,6 +69,7 @@ bin_targets += pg_dumpall
pg_restore_sources = files(
+ 'connectdb.c',
'pg_restore.c',
)
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 658986de6f8..c68a21027fa 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -293,7 +293,7 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
+extern void ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 82d51c89ac6..3fd2818223c 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -414,7 +414,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4437,7 +4437,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5054,7 +5054,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..227dd963984 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
+ConnectDatabaseAhx(Archive *AHX,
const ConnParams *cparams,
bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e41e645f649..015a434fc0b 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -966,7 +966,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 2ea574b0f06..573a8b61a45 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,14 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -129,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -499,19 +490,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1738,256 +1732,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.34.1
Attachment: 0002-pg_dumpall-with-directory-tar-custom-format-and-rest.patch (text/x-patch)
From e3e3d4fc316b22202b7369947e00312e3acd92e2 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:30:12 +0530
Subject: [PATCH 2/4] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
New option added to pg_dumpall:
-F, --format=d|t|c|p     output file format (default: plain text)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
The dump layout is:
global.dat  ::: global SQL commands in plain-text format
map.dat     ::: "dboid dbname" entries for all databases, in plain text
databases/  :::
    subdir dboid1 -> toc.dat and data files in archive format
    subdir dboid2 -> toc.dat and data files in archive format
    etc.
---------------------------------------------------------------------------
NOTE:
If needed, a single database can be restored from its subdirectory:
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres database
-- to find a database's dboid, look up its name in map.dat
New options added to pg_restore:
-g, --globals-only           restore only global objects, no databases
--exclude-database=PATTERN   exclude databases whose names match the pattern
When -g/--globals-only is given, only the globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory,
check for global.dat. If global.dat exists in the directory, first restore all
globals from it, then restore the databases one by one from the map.dat list
(if it exists).
For pg_restore's --exclude-database=PATTERN, the pattern is currently matched as:
SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
With no database connection, only exact PATTERN=NAME matching is done.
For each database, the on_exit_nicely list is reset.
At the end of the restore, a warning reports the total number of errors
(covering global.dat and every database), and a separate warning per database
reports its name and error count.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
---
doc/src/sgml/ref/pg_dumpall.sgml | 78 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/parallel.c | 11 +-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 3 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 22 +-
src/bin/pg_dump/pg_backup_utils.h | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +++++++--
src/bin/pg_dump/pg_restore.c | 847 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1244 insertions(+), 76 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 765b30a3a66..82ea2028469 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster in the specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,10 +121,86 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: this option can be omitted only when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the dump files. To dump all databases, each into
+ its own subdirectory in archive format, choose a non-plain format;
+ the default is plain.
+
+ In non-plain mode, a global.dat file (global SQL commands) and a
+ map.dat file (the dboid and dbname of every database) are created.
+ A subdirectory named databases is also created, containing one entry
+ per database named for its dboid; if <option>--format</option> is
+ directory, the toc.dat and other dump files are placed under that
+ dboid subdirectory.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore. Under each
+ dboid subdirectory, this creates a directory with one file for each table and large
+ object being dumped, plus a so-called Table of Contents file describing the dumped
+ objects in a machine-readable format that pg_restore can read. A directory-format
+ archive can be manipulated with standard Unix tools; for example, files in an
+ uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This
+ format is compressed by default using gzip and also supports parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index c840a807ae9..f0a24134595 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a dump made by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..a36d2a5bf84 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -326,11 +326,18 @@ getThreadLocalPQExpBuffer(void)
* pg_dump and pg_restore call this to register the cleanup handler
* as soon as they've created the ArchiveHandle.
*/
-void
+int
on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
- on_exit_nicely(archive_close_connection, &shutdown_info);
+ return on_exit_nicely(archive_close_connection, &shutdown_info);
+}
+
+void
+replace_on_exit_close_archive(Archive *AHX, int idx)
+{
+ shutdown_info.AHX = AHX;
+ set_on_exit_nicely_entry(archive_close_connection, &shutdown_info, idx);
}
/*
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index c68a21027fa..89459dedc4b 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -311,7 +311,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 3fd2818223c..e22a8810b45 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -337,9 +337,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file; this is needed when
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -456,7 +461,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1292,7 +1297,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1671,7 +1676,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1691,7 +1697,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..ae433132435 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -385,7 +385,8 @@ struct _tocEntry
};
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
-extern void on_exit_close_archive(Archive *AHX);
+extern int on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX, int idx);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..59ece2999a8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -61,14 +61,26 @@ set_dump_section(const char *arg, int *dumpSections)
/* Register a callback to be run when exit_nicely is invoked. */
-void
+int
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
- if (on_exit_nicely_index >= MAX_ON_EXIT_NICELY)
- pg_fatal("out of on_exit_nicely slots");
- on_exit_nicely_list[on_exit_nicely_index].function = function;
- on_exit_nicely_list[on_exit_nicely_index].arg = arg;
+ set_on_exit_nicely_entry(function, arg, on_exit_nicely_index);
on_exit_nicely_index++;
+
+ return (on_exit_nicely_index - 1);
+}
+
+void
+set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int i)
+{
+ if (i >= MAX_ON_EXIT_NICELY)
+ pg_fatal("out of on_exit_nicely slots");
+
+ if (i > on_exit_nicely_index)
+ pg_fatal("no entry exists at index %d in the on_exit_nicely slots", i);
+
+ on_exit_nicely_list[i].function = function;
+ on_exit_nicely_list[i].arg = arg;
}
/*
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index ba042016879..bbefdc112f5 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -28,7 +28,8 @@ typedef void (*on_exit_nicely_callback) (int code, void *arg);
extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
-extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern int on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern void set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int idx);
pg_noreturn extern void exit_nicely(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 015a434fc0b..608b0696c03 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1219,7 +1219,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 573a8b61a45..a3dcc585ace 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -195,6 +200,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -244,7 +251,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -272,7 +279,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -421,6 +430,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, the user must also supply a
+ * file name so that we can create the main output directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=c|d|t requires option -f/--file with a non-empty argument");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -483,6 +507,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory and the global.dat file first.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char toc_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", toc_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -522,19 +573,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -634,7 +672,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -647,7 +685,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -658,12 +696,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or an archive.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1570,10 +1610,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1587,7 +1630,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1595,9 +1638,33 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For non-plain formats, create a "databases" subdirectory under the
+ * main output directory; each database is then dumped into its own file
+ * or subdirectory under it, just as a single-database pg_dump would.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1612,6 +1679,18 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain formats, record the database OID and name in the
+ * map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one "oid dbname" line per database to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1630,9 +1709,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1641,19 +1728,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char toc_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(toc_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(toc_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ toc_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1663,7 +1761,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1672,17 +1771,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For a non-plain format dump, pass the output path and the archive
+ * format to the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1827,3 +1945,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * empty directory of that name.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", "
+ "or run %s with a -f argument "
+ "other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
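For reference, the on-disk naming that dumpDatabases() sets up for a non-plain format can be sketched as a small standalone C function (illustrative only, not the patch's code; it mirrors the snprintf/fprintf calls in the hunks above, with the per-database path quoted for the shell as in the patch):

```c
#include <stdio.h>
#include <string.h>

#define MAXPGPATH 1024

/* Sketch: build the per-database dump path and the map.dat line the same
 * way dumpDatabases() does, using the database OID as the file (or
 * subdirectory) name under "<outdir>/databases". */
static void
build_db_paths(const char *outdir, const char *oid, const char *dbname,
			   char *dbpath, char *mapline)
{
	char		db_subdir[MAXPGPATH];

	/* "<outdir>/databases" holds one dump per database, named by OID */
	snprintf(db_subdir, MAXPGPATH, "%s/databases", outdir);

	/* Path handed to pg_dump -f (shell-quoted, as in the patch) */
	snprintf(dbpath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

	/* One "oid dbname" line per database in map.dat */
	snprintf(mapline, MAXPGPATH, "%s %s\n", oid, dbname);
}
```

So a cluster dumped with `-Fd -f out` ends up as `out/global.dat`, `out/map.dat`, and one archive per database under `out/databases/<oid>`.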
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 47f7b0dd3a1..e4093427e2f 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,30 +41,77 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/connect.h"
+#include "compress_io.h"
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
+#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
+typedef struct SimpleDatabaseOidListCell
+{
+ struct SimpleDatabaseOidListCell *next;
+ Oid db_oid;
+ const char *db_name;
+} SimpleDatabaseOidListCell;
+
+typedef struct SimpleDatabaseOidList
+{
+ SimpleDatabaseOidListCell *head;
+ SimpleDatabaseOidListCell *tail;
+} SimpleDatabaseOidList;
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleDatabaseOidList *dbname_oid_list);
+static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname);
+static void simple_string_full_list_delete(SimpleStringList *list);
+static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
+static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
+static int on_exit_index = 0;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -90,6 +137,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -144,6 +192,7 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -172,7 +221,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -199,11 +248,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -318,6 +370,10 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6:
+ /* database name patterns to be skipped while restoring */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -345,6 +401,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -452,6 +515,108 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, check for global.dat.
+ * If global.dat is present, restore the globals and then all databases
+ * listed in map.dat (if it exists), skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
+ IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql commands. */
+
+ /*
+ * -l/--list is only supported for a single-database archive.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * Restoring multiple databases requires the -C (create database) option.
+ * Report an error even if the dump contains only a single database,
+ * since that database may not exist yet on the target.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the target database already exists, restore that database's individual dump file instead.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from global.dat file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_full_list_delete(&db_exclude_patterns);
+ }
+ else /* Process a single-database archive (no global.dat). */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ on_exit_index = 0; /* Reset index. */
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * Restore one database from its archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -460,8 +625,14 @@ main(int argc, char **argv)
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
* it's still NULL, the cleanup function will just be a no-op.
+ * When restoring multiple databases, remember the exit_nicely slot index
+ * so that the same slot can be reused for each subsequent archive, since
+ * the previous archive has already been closed by CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_index = on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH, on_exit_index);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -481,25 +652,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, it can also restore multiple databases.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -517,6 +685,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -529,6 +698,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -569,8 +739,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -675,3 +845,648 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * IsFileExistsInDirectory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+IsFileExistsInDirectory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * ReadOneStatement
+ *
+ * Read from the given file using fgetc() until a semicolon (the SQL
+ * statement terminator in global.dat) has been consumed.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+ReadOneStatement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from fgetc() */
+ int m;
+
+ StringInfoData q;
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
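The quoting rule ReadOneStatement() implements above — a semicolon only terminates a statement when it falls outside single or double quotes — can be illustrated with this small standalone sketch (illustrative only, operating on an in-memory string rather than a FILE *):

```c
/* Sketch: return the index of the first statement-terminating ';' in buf,
 * treating ';' inside single- or double-quoted runs as ordinary text,
 * mirroring ReadOneStatement()'s handling of global.dat. Returns -1 if
 * no terminator is found. */
static int
first_stmt_end(const char *buf)
{
	char		quote = 0;		/* current open quote char, or 0 */

	for (int i = 0; buf[i]; i++)
	{
		char		c = buf[i];

		if (quote)
		{
			/* inside a quoted run: only the matching quote closes it */
			if (c == quote)
				quote = 0;
		}
		else if (c == '\'' || c == '"')
			quote = c;			/* open a quoted run */
		else if (c == ';')
			return i;			/* unquoted terminator */
	}
	return -1;
}
```

For example, the `;` inside `'x;y'` in `COMMENT ON ROLE a IS 'x;y';` does not end the statement; only the final unquoted `;` does.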
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Remove from dbname_oid_list any entries that match a pattern in the
+ * db_exclude_patterns list; the list may be modified in place.
+ *
+ * Returns the number of databases that will be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleDatabaseOidList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
+ SimpleDatabaseOidListCell *dboidprecell = NULL;
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ /* Return 0 if there is no database to restore. */
+ if (dboid_cell == NULL)
+ return 0;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available, so --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ while (dboid_cell != NULL)
+ {
+ bool skip_db_restore = false;
+ SimpleDatabaseOidListCell *next = dboid_cell->next;
+
+ for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ {
+ /*
+ * Construct a pattern-matching query of the form:
+ * SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE
+ * pg_catalog.default
+ *
+ * where XXX is the database name as a quoted string literal, taken
+ * from dbname_oid_list (originally read from the map.dat file in
+ * the backup directory); that is why quote_literal_cstr() is
+ * needed.
+ *
+ * Without a database connection, the pattern is matched as a literal name.
+ */
+ if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, celldb->val, false,
+ false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ celldb->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern \"%s\"", dboid_cell->db_name, celldb->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Drop the entry if it is excluded; otherwise count it. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
+ simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ }
+ else
+ {
+ count_db++;
+ dboidprecell = dboid_cell;
+ }
+
+ /* Process next dbname from dbname list. */
+ dboid_cell = next;
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names and
+ * their corresponding OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return early since there
+ * are no databases to restore.
+ */
+ if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
+ {
+ pg_log_info("skipping database restore as the map.dat file is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u" , &db_oid);
+ sscanf(line, "%s" , db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file", dbname, db_oid);
+
+ /* Report an error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding dbname to the list, we could check whether this db
+ * needs to be skipped during restore, but for now we are making
+ * a list of all the databases.
+ */
+ simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * This will restore all databases whose dumps are present in the
+ * directory, based on the map.dat file mapping.
+ *
+ * This will skip restoring databases that are specified with the
+ * exclude-database option.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
+ SimpleDatabaseOidListCell *dboid_cell;
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entries, just process the global.dat file and
+ * return.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN/NAME; without a
+ * connection, patterns are matched as literal names.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * By now we have made a list of the databases that need to be restored
+ * after skipping names of exclude-database. Now we can launch parallel
+ * workers to restore these databases.
+ */
+ dboid_cell = dbname_oid_list.head;
+
+ while(dboid_cell != NULL)
+ {
+ char subdirpath[MAXPGPATH];
+ int n_errors;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored into
+ * already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+
+ pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ }
+
+ dboid_cell = dboid_cell->next;
+ count++;
+ }
+
+ /* Log the number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_db_oid_full_list_delete(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * This will open the global.dat file and execute all global sql commands
+ * one statement at a time.
+ * A semicolon is treated as the statement terminator. If outfile is passed,
+ * this will copy all sql commands into outfile rather than executing them.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors during global.dat processing. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * This will copy the global.dat file into the output file. If "-" is used
+ * as outfile, the commands are printed to stdout instead.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * simple_db_oid_list_append
+ *
+ * Appends a node to the end of the list.
+ */
+static void
+simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
+ const char *dbname)
+{
+ SimpleDatabaseOidListCell *cell;
+
+ cell = pg_malloc_object(SimpleDatabaseOidListCell);
+
+ cell->next = NULL;
+ cell->db_oid = db_oid;
+ cell->db_name = pg_strdup(dbname);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * simple_db_oid_full_list_delete
+ *
+ * Deletes all cells from the dbname/dboid list.
+ */
+static void
+simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
+{
+ SimpleDatabaseOidListCell *cell = list->head;
+ SimpleDatabaseOidListCell *nextcell = NULL;
+
+ while (cell)
+ {
+ nextcell = cell->next;
+ pfree (cell);
+ cell = nextcell;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_string_full_list_delete
+ *
+ * Deletes all cells from the string list.
+ */
+static void
+simple_string_full_list_delete(SimpleStringList *list)
+{
+ SimpleStringListCell *cell = list->head;
+ SimpleStringListCell *cellnext = NULL;
+
+ while (cell)
+ {
+ cellnext = cell->next;
+ pfree(cell);
+ cell = cellnext;
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+}
+
+/*
+ * simple_db_oid_list_delete
+ *
+ * Deletes a cell from the database/oid list.
+ */
+static void
+simple_db_oid_list_delete(SimpleDatabaseOidList *list,
+ SimpleDatabaseOidListCell *cell,
+ SimpleDatabaseOidListCell *prev)
+{
+ if (prev == NULL)
+ {
+ list->head = cell->next;
+ pfree(cell);
+ }
+ else
+ {
+ prev->next = cell->next;
+ pfree(cell);
+ }
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * Returns a properly quoted SQL literal.
+ * Copied from src/backend/utils/adt/quote.c.
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 3fbf5a4c212..f3999ee3f9d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2726,6 +2726,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.34.1
Attachment: 0003-add-new-list-type-simple_oid_string_list-to-fe-utils.patch
From bfc7c37284cf38ecad18b79c6e08a9fda06512eb Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Fri, 28 Mar 2025 18:10:24 -0400
Subject: [PATCH 3/4] add new list type simple_oid_string_list to
fe-utils/simple_list
---
src/fe_utils/simple_list.c | 41 ++++++++++++++++++++++++++++++
src/include/fe_utils/simple_list.h | 16 ++++++++++++
2 files changed, 57 insertions(+)
diff --git a/src/fe_utils/simple_list.c b/src/fe_utils/simple_list.c
index 483d5455594..b0686e57c4a 100644
--- a/src/fe_utils/simple_list.c
+++ b/src/fe_utils/simple_list.c
@@ -192,3 +192,44 @@ simple_ptr_list_destroy(SimplePtrList *list)
cell = next;
}
}
+
+/*
+ * Add to an oid_string list
+ */
+void
+simple_oid_string_list_append(SimpleOidStringList *list, Oid oid, const char *str)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = (SimpleOidStringListCell *)
+ pg_malloc(offsetof(SimpleOidStringListCell, str) + strlen(str) + 1);
+
+ cell->next = NULL;
+ cell->oid = oid;
+ strcpy(cell->str, str);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * Destroy an oid_string list
+ */
+void
+simple_oid_string_list_destroy(SimpleOidStringList *list)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = list->head;
+ while (cell != NULL)
+ {
+ SimpleOidStringListCell *next;
+
+ next = cell->next;
+ pg_free(cell);
+ cell = next;
+ }
+}
diff --git a/src/include/fe_utils/simple_list.h b/src/include/fe_utils/simple_list.h
index 3b8e38414ec..d5492408d6c 100644
--- a/src/include/fe_utils/simple_list.h
+++ b/src/include/fe_utils/simple_list.h
@@ -55,6 +55,19 @@ typedef struct SimplePtrList
SimplePtrListCell *tail;
} SimplePtrList;
+typedef struct SimpleOidStringListCell
+{
+ struct SimpleOidStringListCell *next;
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} SimpleOidStringListCell;
+
+typedef struct SimpleOidStringList
+{
+ SimpleOidStringListCell *head;
+ SimpleOidStringListCell *tail;
+} SimpleOidStringList;
+
extern void simple_oid_list_append(SimpleOidList *list, Oid val);
extern bool simple_oid_list_member(SimpleOidList *list, Oid val);
extern void simple_oid_list_destroy(SimpleOidList *list);
@@ -68,4 +81,7 @@ extern const char *simple_string_list_not_touched(SimpleStringList *list);
extern void simple_ptr_list_append(SimplePtrList *list, void *ptr);
extern void simple_ptr_list_destroy(SimplePtrList *list);
+extern void simple_oid_string_list_append(SimpleOidStringList *list, Oid oid, const char *str);
+extern void simple_oid_string_list_destroy(SimpleOidStringList *list);
+
#endif /* SIMPLE_LIST_H */
--
2.34.1
Attachment: 0004-cleanups-and-use-new-simple-list-type.patch
From b51df1f2584f99d67381a3702cec87cb3264e4a3 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Fri, 28 Mar 2025 18:10:57 -0400
Subject: [PATCH 4/4] cleanups and use new simple list type
---
src/bin/pg_dump/pg_dumpall.c | 28 ++---
src/bin/pg_dump/pg_restore.c | 208 +++++++----------------------------
2 files changed, 56 insertions(+), 180 deletions(-)
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index a3dcc585ace..6aab1bfe831 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -434,13 +434,13 @@ main(int argc, char *argv[])
archDumpFormat = parseDumpFormat(formatName);
/*
- * If non-plain format is specified then we must provide the
- * file name to create one main directory.
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
*/
if (archDumpFormat != archNull &&
(!filename || strcmp(filename, "") == 0))
{
- pg_log_error("options -F/--format=d|c|t requires option -f/--file with non-empty string");
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
exit_nicely(1);
}
@@ -513,14 +513,14 @@ main(int argc, char *argv[])
*/
if (archDumpFormat != archNull)
{
- char toc_path[MAXPGPATH];
+ char global_path[MAXPGPATH];
/* Create new directory or accept the empty existing directory. */
create_or_open_dir(filename);
- snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
- OPF = fopen(toc_path, PG_BINARY_W);
+ OPF = fopen(global_path, PG_BINARY_W);
if (!OPF)
pg_fatal("could not open global.dat file: %s", strerror(errno));
}
@@ -1680,7 +1680,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
}
/*
- * If this is non-plain dump format, then append dboid and dbname to
+ * If this is not a plain format dump, then append dboid and dbname to
* the map.dat file.
*/
if (archDumpFormat != archNull)
@@ -1688,7 +1688,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
/* Put one line entry for dboid and dbname in map file. */
- fprintf(map_file, "%s %s\n", oid, pg_strdup(dbname));
+ fprintf(map_file, "%s %s\n", oid, dbname);
}
pg_log_info("dumping database \"%s\"", dbname);
@@ -1734,17 +1734,17 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
if (filename)
{
- char toc_path[MAXPGPATH];
+ char global_path[MAXPGPATH];
if (archDumpFormat != archNull)
- snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename);
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
else
- snprintf(toc_path, MAXPGPATH, "%s", filename);
+ snprintf(global_path, MAXPGPATH, "%s", filename);
- OPF = fopen(toc_path, PG_BINARY_A);
+ OPF = fopen(global_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- toc_path);
+ global_path);
}
}
@@ -1772,7 +1772,7 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
initPQExpBuffer(&cmd);
/*
- * If this is non-plain format dump, then append file name and dump
+ * If this is not a plain format dump, then append file name and dump
* format to the pg_dump command to get archive dump.
*/
if (archDumpFormat != archNull)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index e4093427e2f..44a24791a6e 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -46,8 +46,6 @@
#include <termios.h>
#endif
-#include "common/connect.h"
-#include "compress_io.h"
#include "common/string.h"
#include "connectdb.h"
#include "fe_utils/option_utils.h"
@@ -55,47 +53,24 @@
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
-#include "pg_backup_archiver.h"
#include "pg_backup_utils.h"
-typedef struct SimpleDatabaseOidListCell
-{
- struct SimpleDatabaseOidListCell *next;
- Oid db_oid;
- const char *db_name;
-} SimpleDatabaseOidListCell;
-
-typedef struct SimpleDatabaseOidList
-{
- SimpleDatabaseOidListCell *head;
- SimpleDatabaseOidListCell *tail;
-} SimpleDatabaseOidList;
-
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
-static bool IsFileExistsInDirectory(const char *dir, const char *filename);
+static bool file_exists_in_directory(const char *dir, const char *filename);
static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, int num);
-static int ReadOneStatement(StringInfo inBuf, FILE *pfile);
+static int read_one_statement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
const char *outfile);
static void copy_or_print_global_file(const char *outfile, FILE *pfile);
static int get_dbnames_list_to_restore(PGconn *conn,
- SimpleDatabaseOidList *dbname_oid_list,
+ SimpleOidStringList *dbname_oid_list,
SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
- SimpleDatabaseOidList *dbname_oid_list);
-static void simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname);
-static void simple_string_full_list_delete(SimpleStringList *list);
-static void simple_db_oid_full_list_delete(SimpleDatabaseOidList *list);
-static void simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell,
- SimpleDatabaseOidListCell *prev);
-static void simple_db_oid_list_append(SimpleDatabaseOidList *list,
- Oid db_oid, const char *dbname);
+ SimpleOidStringList *dbname_oid_list);
static size_t quote_literal_internal(char *dst, const char *src, size_t len);
static char *quote_literal_cstr(const char *rawstr);
static int on_exit_index = 0;
@@ -521,8 +496,8 @@ main(int argc, char **argv)
* databases from map.dat(if exist) file list and skip restoring for
* --exclude-database patterns.
*/
- if (inputFileSpec != NULL && !IsFileExistsInDirectory(inputFileSpec, "toc.dat") &&
- IsFileExistsInDirectory(inputFileSpec, "global.dat"))
+ if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
+ file_exists_in_directory(inputFileSpec, "global.dat"))
{
PGconn *conn = NULL; /* Connection to restore global sql commands. */
@@ -578,7 +553,7 @@ main(int argc, char **argv)
}
/* Free db pattern list. */
- simple_string_full_list_delete(&db_exclude_patterns);
+ simple_string_list_destroy(&db_exclude_patterns);
}
else /* process if global.dat file does not exist. */
{
@@ -847,12 +822,12 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
}
/*
- * IsFileExistsInDirectory
+ * file_exists_in_directory
*
* Returns true if the file exists in the given directory.
*/
static bool
-IsFileExistsInDirectory(const char *dir, const char *filename)
+file_exists_in_directory(const char *dir, const char *filename)
{
struct stat st;
char buf[MAXPGPATH];
@@ -864,7 +839,7 @@ IsFileExistsInDirectory(const char *dir, const char *filename)
}
/*
- * ReadOneStatement
+ * read_one_statement
*
* This will start reading from the passed file pointer using fgetc, reading
* until a semicolon (the sql statement terminator for global.dat files)
@@ -873,7 +848,7 @@ IsFileExistsInDirectory(const char *dir, const char *filename)
*/
static int
-ReadOneStatement(StringInfo inBuf, FILE *pfile)
+read_one_statement(StringInfo inBuf, FILE *pfile)
{
int c; /* character read from getc() */
int m;
@@ -941,27 +916,21 @@ ReadOneStatement(StringInfo inBuf, FILE *pfile)
/*
* get_dbnames_list_to_restore
*
- * This will remove entries from dbname_oid_list that pattern matching any
- * in the db_exclude_patterns list. dbname_oid_list maybe inplace modified.
+ * This will mark for skipping any entries from dbname_oid_list that pattern match an
+ * entry in the db_exclude_patterns list.
*
- * returns, number of database will be restored.
+ * Returns the number of databases to be restored.
*
*/
static int
get_dbnames_list_to_restore(PGconn *conn,
- SimpleDatabaseOidList *dbname_oid_list,
+ SimpleOidStringList *dbname_oid_list,
SimpleStringList db_exclude_patterns)
{
- SimpleDatabaseOidListCell *dboid_cell = dbname_oid_list->head;
- SimpleDatabaseOidListCell *dboidprecell = NULL;
int count_db = 0;
PQExpBuffer query;
PGresult *res;
- /* Return 0 if there is no database to restore. */
- if (dboid_cell == NULL)
- return 0;
-
query = createPQExpBuffer();
if (!conn)
@@ -971,12 +940,12 @@ get_dbnames_list_to_restore(PGconn *conn,
* Process one by one all dbnames and if specified to skip restoring, then
* remove dbname from list.
*/
- while (dboid_cell != NULL)
+ for (SimpleOidStringListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
{
bool skip_db_restore = false;
- SimpleDatabaseOidListCell *next = dboid_cell->next;
- for (SimpleStringListCell *celldb = db_exclude_patterns.head; celldb; celldb = celldb->next)
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
{
/*
* Construct the pattern matching query:
@@ -990,21 +959,21 @@ get_dbnames_list_to_restore(PGconn *conn,
*
* If there is no database connection, treat PATTERN as a literal NAME.
*/
- if (pg_strcasecmp(dboid_cell->db_name, celldb->val) == 0)
+ if (pg_strcasecmp(db_cell->str, pat_cell->val) == 0)
skip_db_restore = true;
else if (conn)
{
int dotcnt;
appendPQExpBufferStr(query, "SELECT 1 ");
- processSQLNamePattern(conn, query, celldb->val, false,
- false, NULL, quote_literal_cstr(dboid_cell->db_name),
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, quote_literal_cstr(db_cell->str),
NULL, NULL, NULL, &dotcnt);
if (dotcnt > 0)
{
pg_log_error("improper qualified name (too many dotted names): %s",
- celldb->val);
+ pat_cell->val);
PQfinish(conn);
exit_nicely(1);
}
@@ -1014,7 +983,7 @@ get_dbnames_list_to_restore(PGconn *conn,
if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
{
skip_db_restore = true;
- pg_log_info("database \"%s\" is matching with exclude pattern: \"%s\"", dboid_cell->db_name, celldb->val);
+ pg_log_info("database \"%s\" matches exclude pattern: \"%s\"", db_cell->str, pat_cell->val);
}
PQclear(res);
@@ -1028,17 +997,13 @@ get_dbnames_list_to_restore(PGconn *conn,
/* Increment count if database needs to be restored. */
if (skip_db_restore)
{
- pg_log_info("excluding database \"%s\"", dboid_cell->db_name);
- simple_db_oid_list_delete(dbname_oid_list, dboid_cell, dboidprecell);
+ pg_log_info("excluding database \"%s\"", db_cell->str);
+ db_cell->oid = InvalidOid;
}
else
{
count_db++;
- dboidprecell = dboid_cell;
}
-
- /* Process next dbname from dbname list. */
- dboid_cell = next;
}
return count_db;
@@ -1053,7 +1018,7 @@ get_dbnames_list_to_restore(PGconn *conn,
* Returns the total number of database names in the map.dat file.
*/
static int
-get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list)
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleOidStringList *dbname_oid_list)
{
FILE *pfile;
char map_file_path[MAXPGPATH];
@@ -1064,7 +1029,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
* If the dump contains only a global.dat file, return early since there
* are no databases to restore.
*/
- if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat"))
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
{
pg_log_info("skipping database restore as the map.dat file is not present in \"%s\"", dumpdirpath);
return 0;
@@ -1087,7 +1052,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
/* Extract dboid. */
sscanf(line, "%u" , &db_oid);
- sscanf(line, "%s" , db_oid_str);
+ sscanf(line, "%20s" , db_oid_str);
/* Now copy dbname. */
strcpy(dbname, line + strlen(db_oid_str) + 1);
@@ -1106,7 +1071,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *d
* needs to be skipped during restore, but for now we are making
* a list of all the databases.
*/
- simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+ simple_oid_string_list_append(dbname_oid_list, db_oid, dbname);
count++;
}
@@ -1132,8 +1097,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts,
int numWorkers)
{
- SimpleDatabaseOidList dbname_oid_list = {NULL, NULL};
- SimpleDatabaseOidListCell *dboid_cell;
+ SimpleOidStringList dbname_oid_list = {NULL, NULL};
int num_db_restore = 0;
int num_total_db;
int n_errors_total;
@@ -1183,7 +1147,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
PQfinish(conn);
/* Exit if no db needs to be restored. */
- if (dbname_oid_list.head == NULL)
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
{
pg_log_info("no database needs to be restored out of %d databases", num_total_db);
return n_errors_total;
@@ -1196,13 +1160,16 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* after skipping names of exclude-database. Now we can launch parallel
* workers to restore these databases.
*/
- dboid_cell = dbname_oid_list.head;
-
- while(dboid_cell != NULL)
+ for (SimpleOidStringListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
{
char subdirpath[MAXPGPATH];
int n_errors;
+ /* ignore dbs marked for skipping */
+ if (db_cell->oid == InvalidOid)
+ continue;
+
/*
* We need to reset override_dbname so that objects can be restored into
* already created database. (used with -d/--dbname option)
@@ -1213,9 +1180,9 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
opts->cparams.override_dbname = NULL;
}
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dboid_cell->db_oid);
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
- pg_log_info("restoring database \"%s\"", dboid_cell->db_name);
+ pg_log_info("restoring database \"%s\"", db_cell->str);
/* Restore single database. */
n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
@@ -1224,10 +1191,9 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
if (n_errors)
{
n_errors_total += n_errors;
- pg_log_warning("errors ignored on database \"%s\" restore: %d", dboid_cell->db_name, n_errors);
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", db_cell->str, n_errors);
}
- dboid_cell = dboid_cell->next;
count++;
}
@@ -1235,7 +1201,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
pg_log_info("number of restored databases is %d", num_db_restore);
/* Free dbname and dboid list. */
- simple_db_oid_full_list_delete(&dbname_oid_list);
+ simple_oid_string_list_destroy(&dbname_oid_list);
return n_errors_total;
}
@@ -1281,7 +1247,7 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
initStringInfo(&sqlstatement);
/* Process file till EOF and execute sql statements. */
- while (ReadOneStatement(&sqlstatement, pfile) != EOF)
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
{
pg_log_info("executing query: %s", sqlstatement.data);
result = PQexec(conn, sqlstatement.data);
@@ -1347,96 +1313,6 @@ copy_or_print_global_file(const char *outfile, FILE *pfile)
fclose(OPF);
}
-/*
- * simple_db_oid_list_append
- *
- * appends a node to the list in the end.
- */
-static void
-simple_db_oid_list_append(SimpleDatabaseOidList *list, Oid db_oid,
- const char *dbname)
-{
- SimpleDatabaseOidListCell *cell;
-
- cell = pg_malloc_object(SimpleDatabaseOidListCell);
-
- cell->next = NULL;
- cell->db_oid = db_oid;
- cell->db_name = pg_strdup(dbname);
-
- if (list->tail)
- list->tail->next = cell;
- else
- list->head = cell;
- list->tail = cell;
-}
-
-/*
- * simple_db_oid_full_list_delete
- *
- * delete all cell from dbname and dboid list.
- */
-static void
-simple_db_oid_full_list_delete(SimpleDatabaseOidList *list)
-{
- SimpleDatabaseOidListCell *cell = list->head;
- SimpleDatabaseOidListCell *nextcell = NULL;
-
- while (cell)
- {
- nextcell = cell->next;
- pfree (cell);
- cell = nextcell;
- }
-
- list->head = NULL;
- list->tail = NULL;
-}
-
-/*
- * simple_string_full_list_delete
- *
- * delete all cell from string list.
- */
-static void
-simple_string_full_list_delete(SimpleStringList *list)
-{
- SimpleStringListCell *cell = list->head;
- SimpleStringListCell *cellnext = NULL;
-
- while (cell)
- {
- cellnext = cell->next;
- pfree(cell);
- cell = cellnext;
- }
-
- list->head = NULL;
- list->tail = NULL;
-}
-
-/*
- * simple_db_oid_list_delete
- *
- * delete cell from database and oid list.
- */
-static void
-simple_db_oid_list_delete(SimpleDatabaseOidList *list,
- SimpleDatabaseOidListCell *cell,
- SimpleDatabaseOidListCell *prev)
-{
- if (prev == NULL)
- {
- list->head = cell->next;
- pfree(cell);
- }
- else
- {
- prev->next = cell->next;
- pfree(cell);
- }
-}
-
/*
* quote_literal_internal
*/
--
2.34.1
On Sat, 29 Mar 2025 at 03:50, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote:
On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote:
On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> wrote:

On 2025-03-12 We 3:03 AM, jian he wrote:

On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:

Hello,

On 2025-Mar-11, Mahendra Singh Thalor wrote:

In the map.dat file, I tried to fix this issue by adding the number of
characters in dbname, but as per the code comments, as of now we are not
supporting \n\r in dbnames, so I removed that handling.
I will do some more study to fix this issue.

Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string. After all, you're not going to _execute_
that file via the shell.

Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.

I am confused. Currently pg_dumpall plain format will abort when
encountering a dbname containing a newline, and the partially dumped
plain file does not contain all the cluster databases' data. If
pg_dumpall non-text format aborts earlier, it's aligned with pg_dumpall
plain format? It's also an improvement, since by aborting earlier
nothing will be dumped? Am I missing something?

I think we should fix that.
But for the current proposal, Álvaro and I were talking this morning,
and we thought the simplest thing here would be to have the one-line
format and escape NL/CRs in the database name.

cheers

Okay. As per discussions, we will keep a one-line entry for each
database in the map file.

Thanks all for the feedback and review.
Here, I am attaching updated patches for review and testing. These
patches can be applied on commit a6524105d20b.

I'm working through this patch set with a view to committing it.
Attached is some cleanup, which is where I got to today, although there
is more to do. One thing I am wondering is why not put the
SimpleDatabaseOidList stuff in fe_utils/simple_list.{c,h}? That's
where all the similar stuff belongs, and it feels strange to have this
inline in pg_restore.c. (I also don't like the name much -
SimpleOidStringList or maybe SimpleOidPlusStringList might be better.)

OK, I have done that, so here is the result. The first two are your
original patches. Patch 3 adds the new list type to fe-utils, and patch
4 contains my cleanups and use of the new list type. Apart from some
relatively minor cleanup, the one thing I would like to change is how
dumps are named. If we are producing tar or custom format dumps, I think
the file names should reflect that (oid.dmp and oid.tar rather than a
bare oid as the filename), and pg_restore should look for those. I'm
going to work on that tomorrow - I don't think it will be terribly
difficult.
Thanks Andrew.
Here, I am attaching a delta patch for oid.tar and oid.dmp format.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
delta-0001-pg_dumpall-dump-as-tar-and-dmp-file-for-file.patch (application/octet-stream)
From b43ae117fe3809b82abb5bc89fc62d45a5707ff6 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Sat, 29 Mar 2025 10:14:18 +0530
Subject: [PATCH] pg_dumpall - dump as .tar and .dmp file for tar and custom
format
---
src/bin/pg_dump/pg_dumpall.c | 7 ++++++-
src/bin/pg_dump/pg_restore.c | 22 +++++++++++++++++++++-
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 6aab1bfe831..12983d973be 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1685,7 +1685,12 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
*/
if (archDumpFormat != archNull)
{
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
/* Put one line entry for dboid and dbname in map file. */
fprintf(map_file, "%s %s\n", oid, dbname);
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 44a24791a6e..31cd9c84c5a 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -1164,6 +1164,8 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
db_cell; db_cell = db_cell->next)
{
char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
int n_errors;
/* ignore dbs marked for skipping */
@@ -1180,7 +1182,25 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
opts->cparams.override_dbname = NULL;
}
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
+
+ /*
+ * Identify the database dump file.  If a .tar or .dmp file exists,
+ * use that file; otherwise just append the dboid to the databases
+ * directory.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", db_cell->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, db_cell->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", db_cell->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, db_cell->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
+ }
pg_log_info("restoring database \"%s\"", db_cell->str);
--
2.39.3
On 2025-03-29 Sa 1:17 AM, Mahendra Singh Thalor wrote:
Thanks Andrew.
Here, I am attaching a delta patch for oid.tar and oid.dmp format.
OK, looks good, I have incorporated that.
There are a couple of rough edges, though.
First, I see this:
andrew@ub22arm:inst $ bin/pg_restore -C -d postgres
--exclude-database=regression_dummy_seclabel
--exclude-database=regression_test_extensions
--exclude-database=regression_test_pg_dump dest
pg_restore: error: could not execute query: "ERROR: role "andrew"
already exists
"
Command was: "
--
-- Roles
--
CREATE ROLE andrew;"
pg_restore: warning: errors ignored on global.dat file restore: 1
pg_restore: error: could not execute query: ERROR: database "template1"
already exists
Command was: CREATE DATABASE template1 WITH TEMPLATE = template0
ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
pg_restore: warning: errors ignored on database "template1" restore: 1
pg_restore: error: could not execute query: ERROR: database "postgres"
already exists
Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING
= 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
pg_restore: warning: errors ignored on database "postgres" restore: 1
pg_restore: warning: errors ignored on restore: 3
It seems pointless to try to create the role that we are connected
as, and we also expect template1 and postgres to exist.
In a similar vein, I don't see why we are setting the --create flag in
pg_dumpall for those databases. I'm attaching a patch that is designed
to stop that, but it doesn't solve the above issues.
I also notice a bunch of these in globals.dat:
--
-- Databases
--
--
-- Database "template1" dump
--
--
-- Database "andrew" dump
--
--
-- Database "isolation_regression_brin" dump
--
--
-- Database "isolation_regression_delay_execution" dump
--
...
The patch also tries to fix this.
Lastly, this badly needs some TAP tests written.
I'm going to work on reviewing the documentation next.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
v20250330-0001-Move-common-pg_dump-code-related-to-connec.patch (text/x-patch; charset=UTF-8)
From ed53d8b5ad82e49bb56bc5bc48ddb8426fdb4c80 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:18:46 +0530
Subject: [PATCH v20250330 1/3] Move common pg_dump code related to connections
to a new file
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump so move
common code to new file.
new file name: connectdb.c
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
src/bin/pg_dump/Makefile | 5 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 6 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 79 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +------------------------
9 files changed, 352 insertions(+), 345 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..fa795883e9f 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -31,6 +31,7 @@ OBJS = \
compress_lz4.o \
compress_none.o \
compress_zstd.o \
+ connectdb.o \
dumputils.o \
filter.o \
parallel.o \
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..9e593b70e81
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common code for connecting to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the 'connstr' is set to a connection string containing
+ * the options used and 'server_version' is set to version so that caller
+ * can use them.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in
+ * the form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..6c1e1954769
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..25989e8f16b 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -6,6 +6,7 @@ pg_dump_common_sources = files(
'compress_lz4.c',
'compress_none.c',
'compress_zstd.c',
+ 'connectdb.c',
'dumputils.c',
'filter.c',
'parallel.c',
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 658986de6f8..49bc1ee71ef 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -293,9 +293,9 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect);
+extern void ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
extern PGconn *GetConnection(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 1d131e5a57d..3f59f8f9d9d 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -415,7 +415,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4458,7 +4458,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5076,7 +5076,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..5c349279beb 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect)
+ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 84a78625820..bfa70369c47 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -966,7 +966,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 2ea574b0f06..573a8b61a45 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,14 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -129,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -499,19 +490,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1738,256 +1732,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.34.1
Attachment: v20250330-0002-add-new-list-type-simple_oid_string_list-t.patch (text/x-patch)
From 17af42d16fa7f47a7e07a0e5bb02f323b224c680 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Fri, 28 Mar 2025 18:10:24 -0400
Subject: [PATCH v20250330 2/3] add new list type simple_oid_string_list to
fe-utils/simple_list
---
src/fe_utils/simple_list.c | 41 ++++++++++++++++++++++++++++++
src/include/fe_utils/simple_list.h | 16 ++++++++++++
2 files changed, 57 insertions(+)
diff --git a/src/fe_utils/simple_list.c b/src/fe_utils/simple_list.c
index 483d5455594..bbcc4ef618d 100644
--- a/src/fe_utils/simple_list.c
+++ b/src/fe_utils/simple_list.c
@@ -192,3 +192,44 @@ simple_ptr_list_destroy(SimplePtrList *list)
cell = next;
}
}
+
+/*
+ * Add to an oid_string list
+ */
+void
+simple_oid_string_list_append(SimpleOidStringList *list, Oid oid, const char *str)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = (SimpleOidStringListCell *)
+ pg_malloc(offsetof(SimpleOidStringListCell, str) + strlen(str) + 1);
+
+ cell->next = NULL;
+ cell->oid = oid;
+ strcpy(cell->str, str);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * Destroy an oid_string list
+ */
+void
+simple_oid_string_list_destroy(SimpleOidStringList *list)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = list->head;
+ while (cell != NULL)
+ {
+ SimpleOidStringListCell *next;
+
+ next = cell->next;
+ pg_free(cell);
+ cell = next;
+ }
+}
diff --git a/src/include/fe_utils/simple_list.h b/src/include/fe_utils/simple_list.h
index 3b8e38414ec..af61545d7ff 100644
--- a/src/include/fe_utils/simple_list.h
+++ b/src/include/fe_utils/simple_list.h
@@ -55,6 +55,19 @@ typedef struct SimplePtrList
SimplePtrListCell *tail;
} SimplePtrList;
+typedef struct SimpleOidStringListCell
+{
+ struct SimpleOidStringListCell *next;
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} SimpleOidStringListCell;
+
+typedef struct SimpleOidStringList
+{
+ SimpleOidStringListCell *head;
+ SimpleOidStringListCell *tail;
+} SimpleOidStringList;
+
extern void simple_oid_list_append(SimpleOidList *list, Oid val);
extern bool simple_oid_list_member(SimpleOidList *list, Oid val);
extern void simple_oid_list_destroy(SimpleOidList *list);
@@ -68,4 +81,7 @@ extern const char *simple_string_list_not_touched(SimpleStringList *list);
extern void simple_ptr_list_append(SimplePtrList *list, void *ptr);
extern void simple_ptr_list_destroy(SimplePtrList *list);
+extern void simple_oid_string_list_append(SimpleOidStringList *list, Oid oid, const char *str);
+extern void simple_oid_string_list_destroy(SimpleOidStringList *list);
+
#endif /* SIMPLE_LIST_H */
--
2.34.1
Attachment: v20250330-0003-pg_dumpall-with-directory-tar-custom-forma.patch (text/x-patch)
From 95ddbd6726ebb0eb573a5fbaa80aac9f5a4e7cfb Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:30:12 +0530
Subject: [PATCH v20250330 3/3] pg_dumpall with directory|tar|custom format and
restore it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (directory, tar, custom, plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
the dump is laid out as:
global.dat ::: global SQL commands in plain format
map.dat ::: one "dboid dbname" entry per database, in plain text
databases/ :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
if needed, a single database can be restored from its subdirectory
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres database
-- to find a database's dboid, look up its name in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
With -g/--globals-only, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat instead. If a global.dat file exists in the directory, first restore
all globals from global.dat, then restore each database in turn from the map.dat
list (if it exists).
for --exclude-database=PATTERN, pg_restore currently evaluates
SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
falling back to plain PATTERN=NAME matching when there is no database connection.
The on_exit_nicely list is reset for each database.
At the end of the restore, a warning reports the total number of errors (covering
global.dat and every database), and a per-database warning reports each dbname
with its error count.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
doc/src/sgml/ref/pg_dumpall.sgml | 78 ++-
doc/src/sgml/ref/pg_restore.sgml | 41 +-
src/bin/pg_dump/parallel.c | 11 +-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 3 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 22 +-
src/bin/pg_dump/pg_backup_utils.h | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 284 ++++++++--
src/bin/pg_dump/pg_restore.c | 750 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1152 insertions(+), 77 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 765b30a3a66..82ea2028469 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file or archive</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -121,10 +121,86 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ This option can be omitted only when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of the output. The default is plain. To dump
+ every database in archive format, each into its own subdirectory,
+ choose a non-plain format.
+
+ In non-plain mode, a global.dat file (the global SQL commands) and a
+ map.dat file (the dboid and dbname of every database) are created,
+ along with a subdirectory named databases. Under it there is one
+ entry per database, named for its dboid; with the directory
+ <option>--format</option>, each dboid entry is itself a subdirectory
+ containing toc.dat and the other dump files.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output a directory-format archive suitable for input into pg_restore.
+ For each database, this creates under its dboid subdirectory a directory
+ with one file for each table and large object being dumped, plus a
+ so-called Table of Contents file describing the dumped objects in a
+ machine-readable format that pg_restore can read. A directory-format
+ archive can be manipulated with standard Unix tools; for example, files
+ in an uncompressed archive can be compressed with the gzip, lz4, or zstd
+ tools. This format is compressed by default using gzip and also supports
+ parallel dumps.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive suitable for input into pg_restore. Together with the
+ directory output format, this is the most flexible output format in that it allows manual
+ selection and reordering of archived items during restore. This format is also
+ compressed by default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive suitable for input into pg_restore. The tar format is
+ compatible with the directory format: extracting a tar-format archive produces a valid
+ directory-format archive. However, the tar format does not support compression. Also,
+ when using tar format the relative order of table data items cannot be changed during restore.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index c840a807ae9..f0a24134595 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -37,9 +38,10 @@ PostgreSQL documentation
<title>Description</title>
<para>
- <application>pg_restore</application> is a utility for restoring a
+ <application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
database to the state it was in at the time it was saved. The
archive files also allow <application>pg_restore</application> to
@@ -140,6 +142,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from a dump made by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +170,25 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +338,16 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..a36d2a5bf84 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -326,11 +326,18 @@ getThreadLocalPQExpBuffer(void)
* pg_dump and pg_restore call this to register the cleanup handler
* as soon as they've created the ArchiveHandle.
*/
-void
+int
on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
- on_exit_nicely(archive_close_connection, &shutdown_info);
+ return on_exit_nicely(archive_close_connection, &shutdown_info);
+}
+
+void
+replace_on_exit_close_archive(Archive *AHX, int idx)
+{
+ shutdown_info.AHX = AHX;
+ set_on_exit_nicely_entry(archive_close_connection, &shutdown_info, idx);
}
/*
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 49bc1ee71ef..17d6e06ec25 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -311,7 +311,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 3f59f8f9d9d..54eb4728928 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -338,9 +338,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data into file as we are restoring dump
+ * of multiple databases which was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -457,7 +462,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1293,7 +1298,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1672,7 +1677,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1692,7 +1698,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..dc045b852e9 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -385,7 +385,8 @@ struct _tocEntry
};
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
-extern void on_exit_close_archive(Archive *AHX);
+extern int on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX, int idx);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..59ece2999a8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -61,14 +61,26 @@ set_dump_section(const char *arg, int *dumpSections)
/* Register a callback to be run when exit_nicely is invoked. */
-void
+int
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
- if (on_exit_nicely_index >= MAX_ON_EXIT_NICELY)
- pg_fatal("out of on_exit_nicely slots");
- on_exit_nicely_list[on_exit_nicely_index].function = function;
- on_exit_nicely_list[on_exit_nicely_index].arg = arg;
+ set_on_exit_nicely_entry(function, arg, on_exit_nicely_index);
on_exit_nicely_index++;
+
+ return (on_exit_nicely_index - 1);
+}
+
+void
+set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int i)
+{
+ if (i >= MAX_ON_EXIT_NICELY)
+ pg_fatal("out of on_exit_nicely slots");
+
+ if (i > on_exit_nicely_index)
+ pg_fatal("no entry exists at index %d in on_exit_nicely slots", i);
+
+ on_exit_nicely_list[i].function = function;
+ on_exit_nicely_list[i].arg = arg;
}
/*
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index ba042016879..1ce1077096d 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -28,7 +28,8 @@ typedef void (*on_exit_nicely_callback) (int code, void *arg);
extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
-extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern int on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern void set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int idx);
pg_noreturn extern void exit_nicely(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index bfa70369c47..d8a62736ef1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1219,7 +1219,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 573a8b61a45..7ceaba27419 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -195,6 +200,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -244,7 +251,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -272,7 +279,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -421,6 +430,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -483,6 +507,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(global_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", global_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -522,19 +573,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -634,7 +672,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -647,7 +685,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -658,12 +696,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or an archive.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1570,10 +1610,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1587,7 +1630,7 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
@@ -1595,9 +1638,34 @@ dumpDatabases(PGconn *conn)
if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; each database is then dumped
+ * into its own file or subdirectory there, in archive format, just as
+ * a single-database pg_dump run would produce.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts;
int ret;
@@ -1612,6 +1680,23 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * If this is not a plain format dump, then append dboid and dbname to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
@@ -1630,9 +1715,17 @@ dumpDatabases(PGconn *conn)
create_opts = "--clean --create";
else
{
- create_opts = "";
/* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ if (archDumpFormat == archNull)
+ {
+ create_opts = "";
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
+ else
+ {
+ /* Dumping all databases so add --create option. */
+ create_opts = "--create";
+ }
}
}
else
@@ -1641,19 +1734,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char global_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(global_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ global_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1663,7 +1767,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1672,17 +1777,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain-format dump, append the output file name and
+ * dump format to the pg_dump command to produce an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1827,3 +1951,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * This will create a new directory with the given name. If an empty
+ * directory with that name already exists, it is used instead.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into this directory, either remove or empty "
+ "the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the dump format option, returning the corresponding
+ * ArchiveFormat.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 47f7b0dd3a1..ce70b7e12b2 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,11 +41,15 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -53,18 +57,36 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int read_one_statement(StringInfo inBuf, FILE *pfile);
+static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList * dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleOidStringList * dbname_oid_list);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
+static int on_exit_index = 0;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -90,6 +112,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -144,6 +167,7 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -172,7 +196,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -199,11 +223,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -318,6 +345,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -345,6 +375,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -452,6 +489,114 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, check for global.dat.
+ * If global.dat is present, restore all the databases listed in
+ * map.dat (if it exists), skipping any that match --exclude-database
+ * patterns.
+ */
+ if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
+ file_exists_in_directory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql
+ * commands. */
+
+ /*
+ * The -l/--list option only makes sense for a single-database dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must
+ * be specified. Report an error even if the dump contains only one
+ * database, since that database may not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the archive contains a single database, restore that database's dump file directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from global.dat
+ * file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restoreAllDatabases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* process if global.dat file does not exist. */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restoreOneDatabase(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ on_exit_index = 0; /* Reset index. */
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restoreOneDatabase
+ *
+ * This will restore one database using toc.dat file.
+ *
+ * returns the number of errors while doing restore.
+ */
+static int
+restoreOneDatabase(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -459,9 +604,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, then save index of exit_nicely so that we
+ * can use same slot for all the databases as we already closed the
+ * previous archive by CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_index = on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH, on_exit_index);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -481,25 +632,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, it can also restore multiple databases.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -517,6 +665,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -529,6 +678,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches the pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -569,8 +719,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -675,3 +825,569 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * read_one_statement
+ *
+ * Read from the given file pointer using fgetc until a semicolon (the SQL
+ * statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+read_one_statement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match a pattern
+ * in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList * dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection; --exclude-database PATTERN will be matched literally as a NAME");
+
+ /*
+ * Walk the database list and mark any name that matches an exclude
+ * pattern so it is skipped during restore.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ bool skip_db_restore = false;
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * Construct the pattern matching query: SELECT 1 WHERE XXX
+ * OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
+ *
+ * XXX represents the string literal database name derived from
+ * dbname_oid_list, which is initially extracted from the map.dat
+ * file in the backup directory. That is why quote_literal_cstr is
+ * needed.
+ *
+ * If no db connection, then consider PATTERN as NAME.
+ */
+ if (pg_strcasecmp(db_cell->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, quote_literal_cstr(db_cell->str),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ db_cell->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern: \"%s\"", db_cell->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Skip excluded databases; count the rest for restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", db_cell->str);
+ db_cell->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleOidStringList * dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return here as there
+ * is no database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restore is skipped because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%20s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be skipped,
+ * but for now we simply list all of them.
+ */
+ simple_oid_string_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restoreAllDatabases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * This will skip restoring for databases that are specified with
+ * exclude-database option.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleOidStringList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing global.dat
+ * file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process pg_restore --exclude-database=PATTERN (matched as a plain
+ * NAME if there is no connection).
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We now have the list of databases to restore, with any
+ * --exclude-database matches removed. Launch workers to restore
+ * them, possibly in parallel.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (db_cell->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
+
+ /*
+ * Locate the database dump. If a .tar or .dmp file exists, use it;
+ * otherwise assume a directory named after the database OID.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", db_cell->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, db_cell->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", db_cell->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, db_cell->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", db_cell->str);
+
+ /* Restore single database. */
+ n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", db_cell->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_oid_string_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands it
+ * contains, one statement at a time. A semicolon is treated as the
+ * statement terminator. If outfile is given, copy the SQL commands
+ * into it rather than executing them.
+ *
+ * returns the number of errors while processing global.dat
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* Process file till EOF and execute sql statements. */
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
+ {
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors during global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the global.dat file into outfile. If "-" is given as outfile,
+ * print the commands to stdout.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b66cecd8799..95ec8fbb141 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2732,6 +2732,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.34.1
Attachment: dumpall_cleanup.patch2-noci
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 7ceaba27419..7a06e595b88 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1639,10 +1639,9 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
fprintf(OPF, "--\n-- Databases\n--\n\n");
/*
- * If directory/tar/custom format is specified then create a subdirectory
- * under the main directory and each database dump file subdirectory will
- * be created under the subdirectory in archive mode as per single db
- * pg_dump.
+ * If directory/tar/custom format is specified, create a subdirectory
+ * under the main directory; each database's dump file or subdirectory
+ * will be created in that subdirectory by pg_dump.
*/
if (archDumpFormat != archNull)
{
@@ -1666,7 +1665,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
char *dbname = PQgetvalue(res, i, 0);
char *oid = PQgetvalue(res, i, 1);
- const char *create_opts;
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1699,7 +1698,8 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1713,20 +1713,8 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- /* Since pg_dump won't emit a \connect command, we must */
- if (archDumpFormat == archNull)
- {
- create_opts = "";
- fprintf(OPF, "\\connect %s\n\n", dbname);
- }
- else
- {
- /* Dumping all databases so add --create option. */
- create_opts = "--create";
- }
- }
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
}
else
create_opts = "--create";
On 2025-03-30 Su 12:50 PM, Andrew Dunstan wrote:
On 2025-03-29 Sa 1:17 AM, Mahendra Singh Thalor wrote:
On Sat, 29 Mar 2025 at 03:50, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote:
On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote:
On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-12 We 3:03 AM, jian he wrote:
On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hello,
On 2025-Mar-11, Mahendra Singh Thalor wrote:
In the map.dat file, I tried to fix this issue by adding the number of characters in the dbname, but as per the code comments, as of now we are not supporting \n\r in dbnames, so I removed that handling.
I will do some more study to fix this issue.

Yeah, I think this is saying that you should not consider the contents of map.dat as a shell string. After all, you're not going to _execute_ that file via the shell.

Maybe for map.dat you need to escape such characters somehow, so that they don't appear as literal newlines/carriage returns.

I am confused. Currently pg_dumpall plain format will abort when encountering a dbname containing a newline, and the partially dumped plain file does not contain all the cluster databases' data. If pg_dumpall non-text format aborts earlier, isn't that aligned with pg_dumpall plain format? It's also an improvement, since by aborting earlier nothing will be dumped. Am I missing something?

I think we should fix that.

But for the current proposal, Álvaro and I were talking this morning, and we thought the simplest thing here would be to have the one-line format and escape NL/CRs in the database name.

cheers
Okay. As per discussions, we will keep a one-line entry for each database in the map file.

Thanks all for feedback and review.
Here, I am attaching updated patches for review and testing. These patches can be applied on commit a6524105d20b.

I'm working through this patch set with a view to committing it.
Attached is some cleanup which is where I got to today, although there is more to do. One thing I am wondering is why not put the SimpleDatabaseOidList stuff in fe_utils/simple_list.{c,h}? That's where all the similar stuff belongs, and it feels strange to have this inline in pg_restore.c. (I also don't like the name much - SimpleOidStringList or maybe SimpleOidPlusStringList might be better.)

OK, I have done that, so here is the result. The first two are your original patches. Patch 3 adds the new list type to fe-utils, and patch 4 contains my cleanups and use of the new list type. Apart from some relatively minor cleanup, the one thing I would like to change is how dumps are named. If we are producing tar or custom format dumps, I think the file names should reflect that (oid.dmp and oid.tar rather than a bare oid as the filename), and pg_restore should look for those. I'm going to work on that tomorrow - I don't think it will be terribly difficult.

Thanks Andrew. Here, I am attaching a delta patch for oid.tar and oid.dmp format.

OK, looks good, I have incorporated that.
There are a couple of rough edges, though.

First, I see this:

andrew@ub22arm:inst $ bin/pg_restore -C -d postgres --exclude-database=regression_dummy_seclabel --exclude-database=regression_test_extensions --exclude-database=regression_test_pg_dump dest
pg_restore: error: could not execute query: ERROR:  role "andrew" already exists
Command was: "--
-- Roles
--

CREATE ROLE andrew;"
pg_restore: warning: errors ignored on global.dat file restore: 1
pg_restore: error: could not execute query: ERROR:  database "template1" already exists
Command was: CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
pg_restore: warning: errors ignored on database "template1" restore: 1
pg_restore: error: could not execute query: ERROR:  database "postgres" already exists
Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
pg_restore: warning: errors ignored on database "postgres" restore: 1
pg_restore: warning: errors ignored on restore: 3

It seems pointless to be trying to create the role that we are connected as, and we also expect template1 and postgres to exist.

In a similar vein, I don't see why we are setting the --create flag in pg_dumpall for those databases. I'm attaching a patch that is designed to stop that, but it doesn't solve the above issues.

I also notice a bunch of these in globals.dat:
--
-- Databases
--
--
-- Database "template1" dump
--
--
-- Database "andrew" dump
--
--
-- Database "isolation_regression_brin" dump
--
--
-- Database "isolation_regression_delay_execution" dump
--
...
The patch also tries to fix this.
Lastly, this badly needs some TAP tests written.
I'm going to work on reviewing the documentation next.
I have reworked the documentation some. See attached.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachment: dumpall-docs.patch.noci
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 765b30a3a66..43fdab2d77e 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster using a specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an archive. The archive contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +52,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -121,10 +126,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: this option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index c840a807ae9..f14e5866f6c 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database or cluster
+ from an archive created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -140,6 +149,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +177,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +348,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
On Sun, 30 Mar 2025 at 22:20, Andrew Dunstan <andrew@dunslane.net> wrote:
pg_restore: warning: errors ignored on restore: 3

It seems pointless to be trying to create the role that we are connected as, and we also expect template1 and postgres to exist.
Thanks Andrew for the updated patches.

Here, I am attaching a delta patch which fixes the errors for already-created databases; we also need to reset some flags. Please have a look over this delta patch and merge it.
If we want to skip errors for the connected user (CREATE ROLE username), then we need to handle it by comparing SQL commands in the process_global_sql_commands function, or we can compare errors after executing them.
delta_0002* patch is doing some handling but this is not a proper fix.
I think we can merge delta_0001* and later, we can work on delta_0002.
Lastly, this badly needs some TAP tests written.
I'm going to work on reviewing the documentation next.
Thank you.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachment: delta_0002-pg_restore-skip-error-for-CRETE-ROLE-username.patch
From f795accf5fede15476300a43ea27135708b5d1e1 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 31 Mar 2025 14:59:13 +0530
Subject: [PATCH] pg_restore: skip error for CREATE ROLE username
---
src/bin/pg_dump/pg_restore.c | 32 +++++++++++++++++++++++++++-----
1 file changed, 27 insertions(+), 5 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 3d8be43241d..8ce5b790e51 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -64,7 +64,7 @@ static int read_one_statement(StringInfo inBuf, FILE *pfile);
static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
- const char *outfile);
+ const char *outfile, const char *username);
static void copy_or_print_global_file(const char *outfile, FILE *pfile);
static int get_dbnames_list_to_restore(PGconn *conn,
SimpleOidStringList * dbname_oid_list,
@@ -543,7 +543,7 @@ main(int argc, char **argv)
* commands.
*/
n_errors = process_global_sql_commands(conn, inputFileSpec,
- opts->filename);
+ opts->filename, opts->cparams.username);
if (conn)
PQfinish(conn);
@@ -1123,7 +1123,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* file.
*/
if (dbname_oid_list.head == NULL)
- return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename, opts->cparams.username);
pg_log_info("found total %d database names in map.dat file", num_total_db);
@@ -1153,7 +1153,7 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
db_exclude_patterns);
/* Open global.dat file and execute/append all the global sql commands. */
- n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename, opts->cparams.username);
/* Close the db connection as we are done with globals and patterns. */
if (conn)
@@ -1280,11 +1280,13 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
* returns the number of errors while processing global.dat
*/
static int
-process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile, const char *username)
{
char global_file_path[MAXPGPATH];
PGresult *result;
StringInfoData sqlstatement;
+ StringInfoData rolesqlstatement;
FILE *pfile;
int n_errors = 0;
@@ -1308,10 +1310,30 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
/* Init sqlstatement to append commands. */
initStringInfo(&sqlstatement);
+ initStringInfo(&rolesqlstatement);
/* Process file till EOF and execute sql statements. */
while (read_one_statement(&sqlstatement, pfile) != EOF)
{
+ if (username)
+ {
+ appendStringInfoString(&rolesqlstatement, "\n\n--\n-- Roles\n--\n\nCREATE ROLE ");
+ appendStringInfoString(&rolesqlstatement, username);
+ appendStringInfoString(&rolesqlstatement, ";");
+ }
+
+ /*
+ * If this command is for "CREATE ROLE username", then skip this as
+ * current user is already created.
+ */
+ if (username && strcmp(sqlstatement.data, rolesqlstatement.data) == 0)
+ {
+ resetStringInfo(&rolesqlstatement);
+ continue;
+ }
+
+ resetStringInfo(&rolesqlstatement);
+
pg_log_info("executing query: %s", sqlstatement.data);
result = PQexec(conn, sqlstatement.data);
--
2.39.3
Attachment: delta-0001-pg_restore-skip-error-if-db-already-created.patch
From eb1bd481d4216bb27db227410cc48edc4bf2634e Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 31 Mar 2025 14:01:41 +0530
Subject: [PATCH] pg_restore: if database is already created, then set createdb
as 0
If database is already created, then set createdb as 0 so that user
will not get errors.
Also reset some flags (dumpData, dumpSchema, dumpStatistics) for each
database.
---
src/bin/pg_dump/pg_restore.c | 38 ++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index ce70b7e12b2..3d8be43241d 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -1107,6 +1107,14 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
int num_total_db;
int n_errors_total;
int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+
+ /* Save db name. */
+ if (opts->cparams.dbname)
+ connected_db = pg_strdup(opts->cparams.dbname);
num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
@@ -1209,9 +1217,39 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
pg_log_info("restoring database \"%s\"", db_cell->str);
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (conn)
+ {
+ opts->createDB = 0;
+ PQfinish(conn);
+
+ /* Use already created database for connection. */
+ if (opts->cparams.dbname)
+ opts->cparams.dbname = pg_strdup(db_cell->str);
+ }
+ }
+
/* Restore single database. */
n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+ /* Set opts->createDB flag. */
+ if (opts->createDB == 0)
+ {
+ opts->createDB = 1;
+ opts->cparams.dbname = pg_strdup(connected_db);
+ }
+
+ /* Reset flags for next database. */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
/* Print a summary of ignored errors during single database restore. */
if (n_errors)
{
--
2.39.3
On 2025-03-31 Mo 5:34 AM, Mahendra Singh Thalor wrote:
delta_0002* patch is doing some handling but this is not a proper fix.

I think we can merge delta_0001* and later, we can work on delta_0002.
Yes, delta 1 looks OK, except that the pstrdup() calls are probably
unnecessary. Delta 2 needs some significant surgery at least. I think we
can use it as at least a partial fix, to avoid trying to create the role
we're running as (should use PQuser() for that rather than cparams.username).
BTW, if you're sending delta patches, make sure they don't have .patch
extensions. Otherwise, the CFBot gets upset. I usually just add .noci to
the file names.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Mon, 31 Mar 2025 at 19:27, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-31 Mo 5:34 AM, Mahendra Singh Thalor wrote:
There are a couple of rough edges, though.
First, I see this:
andrew@ub22arm:inst $ bin/pg_restore -C -d postgres
--exclude-database=regression_dummy_seclabel
--exclude-database=regression_test_extensions
--exclude-database=regression_test_pg_dump dest
pg_restore: error: could not execute query: "ERROR: role "andrew"
already exists
"
Command was: "--
-- Roles
--CREATE ROLE andrew;"
pg_restore: warning: errors ignored on global.dat file restore: 1
pg_restore: error: could not execute query: ERROR: database "template1"
already exists
Command was: CREATE DATABASE template1 WITH TEMPLATE = template0
ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';pg_restore: warning: errors ignored on database "template1" restore: 1
pg_restore: error: could not execute query: ERROR: database "postgres"
already exists
Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING
= 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';pg_restore: warning: errors ignored on database "postgres" restore: 1
pg_restore: warning: errors ignored on restore: 3It seems pointless to be trying to create the rolw that we are connected
as, and we also expect template1 and postgres to exist.Thanks Andrew for the updated patches.
Here, I am attaching a delta patch which solves the errors for the
already created database and we need to reset some flags also. Please
have a look over this delta patch and merge it.If we want to skip errors for connected user (CREATE ROLE username),
then we need to handle it by comparing sql commands in
process_global_sql_commands function or we can compare errors after
executing it.
delta_0002* patch is doing some handling but this is not a proper fix.
I think we can merge delta_0001* and later, we can work on delta_0002.
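The delta_0001 approach above (probe whether the target database already exists before deciding to issue CREATE DATABASE) can be sketched in plain C. The probe is a hypothetical stand-in here; the real patch makes the check by calling ConnectDatabase() and seeing whether a connection comes back:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical stand-in for the probe the delta_0001 patch performs with
 * ConnectDatabase(): here we simply pretend the cluster's pre-existing
 * databases are template1 and postgres.
 */
static bool
database_exists(const char *dbname)
{
    return strcmp(dbname, "template1") == 0 ||
           strcmp(dbname, "postgres") == 0;
}

/*
 * Decide whether pg_restore should issue CREATE DATABASE for this
 * database: only when -C was given and the database is not already there
 * (mirrors clearing opts->createDB in the patch).
 */
static bool
want_create_database(bool createDB_option, const char *dbname)
{
    return createDB_option && !database_exists(dbname);
}
```

With this, template1 and postgres are connected to directly instead of recreated, avoiding the "already exists" errors shown earlier.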
Yes, delta 1 looks OK, except that the pstrdup() calls are probably
unnecessary. Delta 2 needs some significant surgery at least. I think we
can use it as at least a partial fix, to avoid trying to create the role
we're running as (Should use PQuser() for that rather than cparams.user).
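A minimal sketch of that comparison, matching the statement text delta_0002 builds with appendStringInfoString. The username is a plain parameter here; in the real code it would come from PQuser(conn):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Build the statement pg_dumpall writes for the role and test a statement
 * read from global.dat against it.  The surrounding text must match the
 * exact string "\n\n--\n-- Roles\n--\n\nCREATE ROLE <user>;" that
 * pg_dumpall emits (as constructed in the delta_0002 patch).
 */
static bool
is_create_role_for_connected_user(const char *stmt, const char *username)
{
    char expected[256];

    snprintf(expected, sizeof(expected),
             "\n\n--\n-- Roles\n--\n\nCREATE ROLE %s;", username);
    return strcmp(stmt, expected) == 0;
}
```

When this returns true, the statement is skipped rather than executed, since the role we are connected as necessarily already exists.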
Thanks for the quick review.
I fixed the above comments and made 2 delta patches. Please have a
look over these.
BTW, if you're sending delta patches, make sure they don't have .patch
extensions. Otherwise, the CFBot gets upset. I usually just add .noci to
the file names.
Sure. I will also use .noci. Thanks for feedback.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
delta-0001-31march-pg_restore-skip-error-if-db-already-created.noci (application/octet-stream)
From fc8902ed741c83402c1393e6d0677f4b9cbc52fb Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 31 Mar 2025 20:36:25 +0530
Subject: [PATCH 1/2] pg_restore: if database is already created, then set
createdb as 0
If the database already exists, clear the createDB flag so that the
user does not get "already exists" errors.
Also reset some flags (dumpData, dumpSchema, dumpStatistics) for each
database.
---
src/bin/pg_dump/pg_restore.c | 37 ++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index ce70b7e12b2..d8037da653f 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -1107,6 +1107,14 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
int num_total_db;
int n_errors_total;
int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+
+ /* Save db name to reuse it for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
@@ -1209,9 +1217,38 @@ restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
pg_log_info("restoring database \"%s\"", db_cell->str);
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (conn)
+ {
+ opts->createDB = 0;
+ PQfinish(conn);
+
+ /* Use already created database for connection. */
+ opts->cparams.dbname = db_cell->str;
+ }
+ }
+
/* Restore single database. */
n_errors = restoreOneDatabase(subdirpath, opts, numWorkers, true, count);
+ /* Set opts->createDB flag. */
+ if (opts->createDB == 0)
+ {
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+
+ /* Reset flags for next database. */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
/* Print a summary of ignored errors during single database restore. */
if (n_errors)
{
--
2.39.3
delta-0002-31march-pg_restore-skip-error-for-CRETE-ROLE-username.noci (application/octet-stream)
From d75856b200e5ab707f36119b79b258b5e3baa7c7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 31 Mar 2025 21:39:26 +0530
Subject: [PATCH] pg_restore: skip error for CRETE ROLE username
---
src/bin/pg_dump/pg_restore.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index d8037da653f..4800527a2f3 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -542,8 +542,7 @@ main(int argc, char **argv)
* Open global.dat file and execute/append all the global sql
* commands.
*/
- n_errors = process_global_sql_commands(conn, inputFileSpec,
- opts->filename);
+ n_errors = process_global_sql_commands(conn, inputFileSpec, opts->filename);
if (conn)
PQfinish(conn);
@@ -1284,8 +1283,13 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
char global_file_path[MAXPGPATH];
PGresult *result;
StringInfoData sqlstatement;
+ StringInfoData rolesqlstatement;
FILE *pfile;
int n_errors = 0;
+ bool check_role_cmd = true;
+
+ /* Should have valid connection. */
+ Assert(conn);
snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
@@ -1308,9 +1312,26 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
/* Init sqlstatement to append commands. */
initStringInfo(&sqlstatement);
+ /* Prepare "CREATE ROLE username" command. */
+ initStringInfo(&rolesqlstatement);
+ appendStringInfoString(&rolesqlstatement, "\n\n--\n-- Roles\n--\n\nCREATE ROLE ");
+ appendStringInfoString(&rolesqlstatement, PQuser(conn));
+ appendStringInfoString(&rolesqlstatement, ";");
+
/* Process file till EOF and execute sql statements. */
while (read_one_statement(&sqlstatement, pfile) != EOF)
{
+
+ /*
+ * If this command is for "CREATE ROLE username", then skip this as
+ * current user is already created.
+ */
+ if (check_role_cmd && (strcmp(sqlstatement.data, rolesqlstatement.data) == 0))
+ {
+ check_role_cmd = false;
+ continue;
+ }
+
pg_log_info("executing query: %s", sqlstatement.data);
result = PQexec(conn, sqlstatement.data);
--
2.39.3
On 2025-03-31 Mo 12:16 PM, Mahendra Singh Thalor wrote:
[...]
Thanks. Here are patches that contain (my version of) all the cleanups.
With this I get a clean restore run in my test case with no error messages.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2025-03-31 Mo 1:16 PM, Andrew Dunstan wrote:
Thanks. Here are patches that contain (my version of) all the
cleanups. With this I get a clean restore run in my test case with no
error messages.
This time with patches
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
v20250331-0001-Move-common-pg_dump-code-related-to-connec.patch (text/x-patch; charset=UTF-8)
From 6286701ff360ccb8c105fa5aa0a8f9bba3b1d1d7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:18:46 +0530
Subject: [PATCH v20250331 1/3] Move common pg_dump code related to connections
to a new file
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump, so move
the common code to a new file.
new file name: connectdb.c
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
src/bin/pg_dump/Makefile | 5 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 6 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 79 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +------------------------
9 files changed, 352 insertions(+), 345 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..fa795883e9f 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -31,6 +31,7 @@ OBJS = \
compress_lz4.o \
compress_none.o \
compress_zstd.o \
+ connectdb.o \
dumputils.o \
filter.o \
parallel.o \
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..9e593b70e81
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common code for connecting to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the 'connstr' is set to a connection string containing
+ * the options used and 'server_version' is set to version so that caller
+ * can use them.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in
+ * the form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..6c1e1954769
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..25989e8f16b 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -6,6 +6,7 @@ pg_dump_common_sources = files(
'compress_lz4.c',
'compress_none.c',
'compress_zstd.c',
+ 'connectdb.c',
'dumputils.c',
'filter.c',
'parallel.c',
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 658986de6f8..49bc1ee71ef 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -293,9 +293,9 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect);
+extern void ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
extern PGconn *GetConnection(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 1d131e5a57d..3f59f8f9d9d 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -415,7 +415,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4458,7 +4458,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5076,7 +5076,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..5c349279beb 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect)
+ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4ca34be230c..f84ea6ecc48 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -966,7 +966,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 2ea574b0f06..573a8b61a45 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,14 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -129,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -499,19 +490,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1738,256 +1732,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.34.1
Attachment: v20250331-0002-add-new-list-type-simple_oid_string_list-t.patch (text/x-patch)
From f172c7db61a75d4bbb586d597ae8e9e44d02642c Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Fri, 28 Mar 2025 18:10:24 -0400
Subject: [PATCH v20250331 2/3] add new list type simple_oid_string_list to
fe-utils/simple_list
---
src/fe_utils/simple_list.c | 41 ++++++++++++++++++++++++++++++
src/include/fe_utils/simple_list.h | 16 ++++++++++++
2 files changed, 57 insertions(+)
diff --git a/src/fe_utils/simple_list.c b/src/fe_utils/simple_list.c
index 483d5455594..bbcc4ef618d 100644
--- a/src/fe_utils/simple_list.c
+++ b/src/fe_utils/simple_list.c
@@ -192,3 +192,44 @@ simple_ptr_list_destroy(SimplePtrList *list)
cell = next;
}
}
+
+/*
+ * Add to an oid_string list
+ */
+void
+simple_oid_string_list_append(SimpleOidStringList * list, Oid oid, const char *str)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = (SimpleOidStringListCell *)
+ pg_malloc(offsetof(SimpleOidStringListCell, str) + strlen(str) + 1);
+
+ cell->next = NULL;
+ cell->oid = oid;
+ strcpy(cell->str, str);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * Destroy an oid_string list
+ */
+void
+simple_oid_string_list_destroy(SimpleOidStringList * list)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = list->head;
+ while (cell != NULL)
+ {
+ SimpleOidStringListCell *next;
+
+ next = cell->next;
+ pg_free(cell);
+ cell = next;
+ }
+}
diff --git a/src/include/fe_utils/simple_list.h b/src/include/fe_utils/simple_list.h
index 3b8e38414ec..af61545d7ff 100644
--- a/src/include/fe_utils/simple_list.h
+++ b/src/include/fe_utils/simple_list.h
@@ -55,6 +55,19 @@ typedef struct SimplePtrList
SimplePtrListCell *tail;
} SimplePtrList;
+typedef struct SimpleOidStringListCell
+{
+ struct SimpleOidStringListCell *next;
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} SimpleOidStringListCell;
+
+typedef struct SimpleOidStringList
+{
+ SimpleOidStringListCell *head;
+ SimpleOidStringListCell *tail;
+} SimpleOidStringList;
+
extern void simple_oid_list_append(SimpleOidList *list, Oid val);
extern bool simple_oid_list_member(SimpleOidList *list, Oid val);
extern void simple_oid_list_destroy(SimpleOidList *list);
@@ -68,4 +81,7 @@ extern const char *simple_string_list_not_touched(SimpleStringList *list);
extern void simple_ptr_list_append(SimplePtrList *list, void *ptr);
extern void simple_ptr_list_destroy(SimplePtrList *list);
+extern void simple_oid_string_list_append(SimpleOidStringList * list, Oid oid, const char *str);
+extern void simple_oid_string_list_destroy(SimpleOidStringList * list);
+
#endif /* SIMPLE_LIST_H */
--
2.34.1
Attachment: v20250331-0003-pg_dumpall-with-directory-tar-custom-forma.patch (text/x-patch)
From 00c534f0e4f501cadb84f2d26b78c92995971f3b Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:30:12 +0530
Subject: [PATCH v20250331 3/3] pg_dumpall with directory|tar|custom format and
restore it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text (default))
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
the dumps are laid out as:
global.dat ::: global sql commands in simple plain format
map.dat    ::: dboid dbname --- entries for all databases in simple text form
databases  :::
  subdir dboid1 -> toc.dat and data files in archive format
  subdir dboid2 -> toc.dat and data files in archive format
  etc
---------------------------------------------------------------------------
NOTE:
if needed, a single database can be restored from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres database
-- to get the dboid, look up the dbname in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
With the -g/--globals-only option, only globals are restored; no databases are restored.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, check
for global.dat to restore all databases. If global.dat exists in the directory,
then first restore all globals from global.dat and then restore the databases one by one
from the map.dat list (if it exists)
for --exclude-database=PATTERN in pg_restore:
as of now, SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if there is no db connection, then only exact PATTERN=NAME matching is done
for each database, we reset the on_exit_nicely_index list.
at the end of the restore, we emit a warning with the total number of errors (including those
from global.dat and from each database), and for each database we print a warning with the
dbname and its error count.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
doc/src/sgml/ref/pg_dumpall.sgml | 86 ++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/parallel.c | 11 +-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 3 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_backup_utils.c | 22 +-
src/bin/pg_dump/pg_backup_utils.h | 3 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 295 ++++++++--
src/bin/pg_dump/pg_restore.c | 802 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
14 files changed, 1231 insertions(+), 94 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 765b30a3a66..43fdab2d77e 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster using a specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an archive. The archive contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +52,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -121,10 +126,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in the
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index c840a807ae9..f14e5866f6c 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database or cluster
+ from an archive created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -140,6 +149,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +177,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +348,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..a36d2a5bf84 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -326,11 +326,18 @@ getThreadLocalPQExpBuffer(void)
* pg_dump and pg_restore call this to register the cleanup handler
* as soon as they've created the ArchiveHandle.
*/
-void
+int
on_exit_close_archive(Archive *AHX)
{
shutdown_info.AHX = AHX;
- on_exit_nicely(archive_close_connection, &shutdown_info);
+ return on_exit_nicely(archive_close_connection, &shutdown_info);
+}
+
+void
+replace_on_exit_close_archive(Archive *AHX, int idx)
+{
+ shutdown_info.AHX = AHX;
+ set_on_exit_nicely_entry(archive_close_connection, &shutdown_info, idx);
}
/*
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 49bc1ee71ef..17d6e06ec25 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -311,7 +311,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 3f59f8f9d9d..54eb4728928 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -338,9 +338,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data into file as we are restoring dump
+ * of multiple databases which was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -457,7 +462,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1293,7 +1298,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1672,7 +1677,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1692,7 +1698,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..dc045b852e9 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -385,7 +385,8 @@ struct _tocEntry
};
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
-extern void on_exit_close_archive(Archive *AHX);
+extern int on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX, int idx);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_backup_utils.c b/src/bin/pg_dump/pg_backup_utils.c
index 79aec5f5158..59ece2999a8 100644
--- a/src/bin/pg_dump/pg_backup_utils.c
+++ b/src/bin/pg_dump/pg_backup_utils.c
@@ -61,14 +61,26 @@ set_dump_section(const char *arg, int *dumpSections)
/* Register a callback to be run when exit_nicely is invoked. */
-void
+int
on_exit_nicely(on_exit_nicely_callback function, void *arg)
{
- if (on_exit_nicely_index >= MAX_ON_EXIT_NICELY)
- pg_fatal("out of on_exit_nicely slots");
- on_exit_nicely_list[on_exit_nicely_index].function = function;
- on_exit_nicely_list[on_exit_nicely_index].arg = arg;
+ set_on_exit_nicely_entry(function, arg, on_exit_nicely_index);
on_exit_nicely_index++;
+
+ return (on_exit_nicely_index - 1);
+}
+
+void
+set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int i)
+{
+ if (i >= MAX_ON_EXIT_NICELY)
+ pg_fatal("out of on_exit_nicely slots");
+
+ if (i > on_exit_nicely_index)
+ pg_fatal("no entry exists at index %d in on_exit_nicely slots", i);
+
+ on_exit_nicely_list[i].function = function;
+ on_exit_nicely_list[i].arg = arg;
}
/*
diff --git a/src/bin/pg_dump/pg_backup_utils.h b/src/bin/pg_dump/pg_backup_utils.h
index ba042016879..1ce1077096d 100644
--- a/src/bin/pg_dump/pg_backup_utils.h
+++ b/src/bin/pg_dump/pg_backup_utils.h
@@ -28,7 +28,8 @@ typedef void (*on_exit_nicely_callback) (int code, void *arg);
extern const char *progname;
extern void set_dump_section(const char *arg, int *dumpSections);
-extern void on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern int on_exit_nicely(on_exit_nicely_callback function, void *arg);
+extern void set_on_exit_nicely_entry(on_exit_nicely_callback function, void *arg, int idx);
pg_noreturn extern void exit_nicely(int code);
/* In pg_dump, we modify pg_fatal to call exit_nicely instead of exit */
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index f84ea6ecc48..832a6af7091 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1219,7 +1219,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 573a8b61a45..9d08c6ca0e6 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -195,6 +200,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -244,7 +251,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -272,7 +279,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -421,6 +430,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -483,6 +507,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory and global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(global_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open global.dat file: %m");
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -522,19 +573,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -634,7 +672,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -647,7 +685,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -658,12 +696,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster based on the specified dump format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -969,9 +1009,6 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
@@ -1485,6 +1522,7 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
+ static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1496,7 +1534,13 @@ dumpUserConfig(PGconn *conn, const char *username)
res = executeQuery(conn, buf->data);
if (PQntuples(res) > 0)
+ {
+ if (!header_done)
+ fprintf(OPF, "\n--\n-- User Configurations\n--\n");
+ header_done = true;
+
fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", username);
+ }
for (int i = 0; i < PQntuples(res); i++)
{
@@ -1570,10 +1614,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1587,18 +1634,42 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (archDumpFormat == archNull && PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; pg_dump will then create each database dump
+ * file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open map file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * For non-plain formats, compute the per-database dump path and append
+ * the database OID and name to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1628,12 +1717,9 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char global_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(global_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ global_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1663,7 +1760,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For non-plain formats, pass the output file name and the archive
+ * format to the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1827,3 +1944,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * This will create a new directory with the given name.  If an empty
+ * directory with that name already exists, it is used instead.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("If you want to dump into directory \"%s\", either remove "
+ "or empty it, or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse the given format name and return the corresponding ArchiveFormat.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
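To illustrate the map.dat contract that dumpDatabases() writes and pg_restore later reads back (one `<oid> <dbname>` line per database), here is a standalone sketch of the parsing step. `parse_map_line` is a hypothetical helper written for this example, not part of the patch; it mirrors the sscanf/strcpy approach of get_dbname_oid_list_from_mfile.

```c
#include <stdio.h>
#include <string.h>

/*
 * Parse one "<oid> <dbname>\n" line of a map.dat file, as written by
 * dumpDatabases().  The name may contain spaces; it runs from the first
 * character after the OID to the end of the line.  Returns 1 on success.
 */
static int
parse_map_line(const char *line, unsigned int *oid,
			   char *dbname, size_t dbname_sz)
{
	char		oid_str[32] = {0};

	/* Read the OID twice: once as a number, once as a string for its width. */
	if (sscanf(line, "%u", oid) != 1 || sscanf(line, "%31s", oid_str) != 1)
		return 0;

	/* The database name starts after the OID string and one space. */
	strncpy(dbname, line + strlen(oid_str) + 1, dbname_sz - 1);
	dbname[dbname_sz - 1] = '\0';

	/* Strip the trailing newline, if present. */
	if (dbname[0] != '\0' && dbname[strlen(dbname) - 1] == '\n')
		dbname[strlen(dbname) - 1] = '\0';

	return 1;
}
```

Note that a database name containing a newline would break this one-line-per-entry format; the patch inherits that limitation.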
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 47f7b0dd3a1..5ba0c4b768f 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,11 +41,15 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -53,18 +57,36 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int read_one_statement(StringInfo inBuf, FILE *pfile);
+static int restore_all_databases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList * dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleOidStringList * dbname_oid_list);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
+static int on_exit_index = 0;
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -90,6 +112,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -144,6 +167,7 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -172,7 +196,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -199,11 +223,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -318,6 +345,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -345,6 +375,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -452,6 +489,115 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, check for global.dat.
+ * If global.dat is present, restore all the databases listed in the
+ * map.dat file (if it exists), skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
+ file_exists_in_directory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql
+ * commands. */
+
+ /*
+ * The -l/--list option is only supported for a single-database dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * Restoring multiple databases requires the -C/--create option.  We
+ * insist on it even when the dump contains only a single database,
+ * since that database may not yet exist on the target server and the
+ * restore would fail without it.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists, restore from its individual dump file instead.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from global.dat
+ * file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("skipping database restore because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restore_all_databases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist; restore a single database. */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ on_exit_index = 0; /* Reset index. */
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore a single database from its archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -459,9 +605,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, then save index of exit_nicely so that we
+ * can use same slot for all the databases as we already closed the
+ * previous archive by CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_index = on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH, on_exit_index);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -481,25 +633,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, it can also restore multiple databases.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -517,6 +666,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -529,6 +679,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -569,8 +720,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -675,3 +826,620 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * read_one_statement
+ *
+ * Read from the given file pointer using fgetc until a semicolon (the SQL
+ * statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+read_one_statement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
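The quote handling in read_one_statement() above boils down to one rule: a semicolon terminates a statement only when it is not inside a single- or double-quoted run. A simplified standalone sketch of that rule (this is an illustration, not the patch's actual fgetc-based loop):

```c
#include <stddef.h>

/*
 * Return the offset just past the semicolon ending the first statement in
 * sql, treating semicolons inside '...' or "..." runs as ordinary
 * characters.  Returns the string length if no terminator is found.
 */
static size_t
next_statement_end(const char *sql)
{
	char		quote = 0;		/* active quote character, or 0 if none */
	size_t		i;

	for (i = 0; sql[i] != '\0'; i++)
	{
		if (quote)
		{
			if (sql[i] == quote)
				quote = 0;		/* closing quote ends the quoted run */
		}
		else if (sql[i] == '\'' || sql[i] == '"')
			quote = sql[i];		/* opening quote starts a quoted run */
		else if (sql[i] == ';')
			return i + 1;		/* unquoted semicolon terminates */
	}
	return i;
}
```

So `ALTER ROLE r1 PASSWORD 'a;b';` is treated as one statement; the semicolon inside the quoted password does not split it.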
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry
+ * in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList * dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available; treating --exclude-database patterns as literal names");
+
+ /*
+ * Check each database name against the exclude patterns; matches are
+ * marked with InvalidOid so they are skipped during restore.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ bool skip_db_restore = false;
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * Construct a pattern-matching query: SELECT 1 WHERE XXX
+ * OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
+ *
+ * Here XXX is the database name as a string literal, taken from
+ * dbname_oid_list (which was read from the map.dat file in the
+ * backup directory); that is why quote_literal_cstr is needed.
+ *
+ * Without a database connection, the pattern is compared as a
+ * literal name instead.
+ */
+ if (pg_strcasecmp(db_cell->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, quote_literal_cstr(db_cell->str),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ db_cell->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern: \"%s\"", db_cell->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Mark excluded databases; count those to be restored. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", db_cell->str);
+ db_cell->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file and read it line by line, building a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database entries in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleOidStringList * dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, there is no database to
+ * restore, so return here.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%20s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Remove the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: before adding the dbname to the list, we could check whether
+ * this database should be skipped during restore, but for now we
+ * simply build a list of all the databases.
+ */
+ simple_oid_string_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, using
+ * the mapping recorded in the map.dat file.
+ *
+ * Databases matching an --exclude-database pattern are skipped during
+ * restore.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleOidStringList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+
+ /* Save the connection database name so it can be reused for each database. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing global.dat
+ * file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\"; trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Apply --exclude-database patterns (treated as literal names when there is no connection).
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info("no database out of %d needs to be restored", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("restoring %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * By now we have the list of databases to restore, with excluded
+ * names already filtered out.  Restore them one by one, optionally
+ * launching parallel workers for each database.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* Skip databases marked as excluded. */
+ if (!OidIsValid(db_cell->oid))
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
+
+ /*
+ * Locate the database dump.  If a .tar or .dmp file exists, use that
+ * file; otherwise assume a directory-format dump named after the
+ * database OID.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", db_cell->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, db_cell->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", db_cell->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, db_cell->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", db_cell->str);
+
+ /* If the database already exists, don't set the createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = db_cell->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", db_cell->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_oid_string_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * This will open the global.dat file and execute all the global SQL
+ * commands in it, one statement at a time.  A semicolon is treated as
+ * the statement terminator.  If outfile is passed, the SQL commands are
+ * copied into outfile rather than being executed.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement, user_create;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* creation statement for our current role */
+ initStringInfo(&user_create);
+ appendStringInfoString(&user_create, "CREATE ROLE ");
+ /* should use fmtId here, but we don't know the encoding */
+ appendStringInfoString(&user_create, PQuser(conn));
+ appendStringInfoString(&user_create, ";");
+
+ /* Process file till EOF and execute sql statements. */
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
+ {
+ /* don't try to create the role we are connected as */
+ if (strstr(sqlstatement.data, user_create.data))
+ continue;
+
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of errors ignored while restoring global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the global.dat file into outfile.  If "-" is given as outfile,
+ * print the commands to stdout instead.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Copy global.dat into the output file or print it to stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b66cecd8799..95ec8fbb141 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2732,6 +2732,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.34.1
Hi
FWIW I don't think the on_exit_nicely business is in final shape just
yet. We're doing something super strange and novel about keeping track
of an array index, so that we can modify it later. Or something like
that, I think? That doesn't sound all that nice to me. Elsewhere it
was suggested that we need some way to keep track of the list of things
that need cleanup (a list of connections IIRC?) -- perhaps in a
thread-local variable or a global or something -- and we install the
cleanup function once, and that reads from the variable. The program
can add things to the list, or remove them, at will; and we don't need
to modify the cleanup function in any way.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hi
FWIW I don't think the on_exit_nicely business is in final shape just
yet. We're doing something super strange and novel about keeping track
of an array index, so that we can modify it later. Or something like
that, I think? That doesn't sound all that nice to me. Elsewhere it
was suggested that we need some way to keep track of the list of things
that need cleanup (a list of connections IIRC?) -- perhaps in a
thread-local variable or a global or something -- and we install the
cleanup function once, and that reads from the variable. The program
can add things to the list, or remove them, at will; and we don't need
to modify the cleanup function in any way.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
Thanks Álvaro for the feedback.
I removed the old handling of on_exit_nicely_list from the last patch
set and added one simple function that just updates the archive handle
in shutdown_info (shutdown_info.AHX = AHX;).
For the first database, we add an entry to the on_exit_nicely_list
array; for the remaining databases, we only update shutdown_info, since
the connection for the previous database has already been closed. With
this fix, we no longer touch the on_exit_nicely_list entry for each
database.
Here, I am attaching the updated patches.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v25_0002-add-new-list-type-simple_oid_string_list-to-fe-utils.patch
From a1701016ca6647df22847fb87b369d94bd60ece9 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Fri, 28 Mar 2025 18:10:24 -0400
Subject: [PATCH 2/4] add new list type simple_oid_string_list to
fe-utils/simple_list
---
src/fe_utils/simple_list.c | 41 ++++++++++++++++++++++++++++++
src/include/fe_utils/simple_list.h | 16 ++++++++++++
2 files changed, 57 insertions(+)
diff --git a/src/fe_utils/simple_list.c b/src/fe_utils/simple_list.c
index 483d5455594..bbcc4ef618d 100644
--- a/src/fe_utils/simple_list.c
+++ b/src/fe_utils/simple_list.c
@@ -192,3 +192,44 @@ simple_ptr_list_destroy(SimplePtrList *list)
cell = next;
}
}
+
+/*
+ * Add to an oid_string list
+ */
+void
+simple_oid_string_list_append(SimpleOidStringList * list, Oid oid, const char *str)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = (SimpleOidStringListCell *)
+ pg_malloc(offsetof(SimpleOidStringListCell, str) + strlen(str) + 1);
+
+ cell->next = NULL;
+ cell->oid = oid;
+ strcpy(cell->str, str);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * Destroy an oid_string list
+ */
+void
+simple_oid_string_list_destroy(SimpleOidStringList * list)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = list->head;
+ while (cell != NULL)
+ {
+ SimpleOidStringListCell *next;
+
+ next = cell->next;
+ pg_free(cell);
+ cell = next;
+ }
+}
diff --git a/src/include/fe_utils/simple_list.h b/src/include/fe_utils/simple_list.h
index 3b8e38414ec..af61545d7ff 100644
--- a/src/include/fe_utils/simple_list.h
+++ b/src/include/fe_utils/simple_list.h
@@ -55,6 +55,19 @@ typedef struct SimplePtrList
SimplePtrListCell *tail;
} SimplePtrList;
+typedef struct SimpleOidStringListCell
+{
+ struct SimpleOidStringListCell *next;
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} SimpleOidStringListCell;
+
+typedef struct SimpleOidStringList
+{
+ SimpleOidStringListCell *head;
+ SimpleOidStringListCell *tail;
+} SimpleOidStringList;
+
extern void simple_oid_list_append(SimpleOidList *list, Oid val);
extern bool simple_oid_list_member(SimpleOidList *list, Oid val);
extern void simple_oid_list_destroy(SimpleOidList *list);
@@ -68,4 +81,7 @@ extern const char *simple_string_list_not_touched(SimpleStringList *list);
extern void simple_ptr_list_append(SimplePtrList *list, void *ptr);
extern void simple_ptr_list_destroy(SimplePtrList *list);
+extern void simple_oid_string_list_append(SimpleOidStringList * list, Oid oid, const char *str);
+extern void simple_oid_string_list_destroy(SimpleOidStringList * list);
+
#endif /* SIMPLE_LIST_H */
--
2.39.3
v25_0001-Move-common-pg_dump-code-related-to-connections.patch
From 259644a2b0d22b286c0f599927135325e8905894 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:18:46 +0530
Subject: [PATCH 1/4] Move common pg_dump code related to connections to a new
file
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump so move
common code to new file.
new file name: connectdb.c
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
src/bin/pg_dump/Makefile | 5 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 6 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 79 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +------------------------
9 files changed, 352 insertions(+), 345 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..fa795883e9f 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -31,6 +31,7 @@ OBJS = \
compress_lz4.o \
compress_none.o \
compress_zstd.o \
+ connectdb.o \
dumputils.o \
filter.o \
parallel.o \
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..9e593b70e81
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common code for making a connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the 'connstr' is set to a connection string containing
+ * the options used and 'server_version' is set to version so that caller
+ * can use them.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in
+ * the form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..6c1e1954769
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..25989e8f16b 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -6,6 +6,7 @@ pg_dump_common_sources = files(
'compress_lz4.c',
'compress_none.c',
'compress_zstd.c',
+ 'connectdb.c',
'dumputils.c',
'filter.c',
'parallel.c',
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 658986de6f8..49bc1ee71ef 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -293,9 +293,9 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect);
+extern void ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
extern PGconn *GetConnection(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 1d131e5a57d..3f59f8f9d9d 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -415,7 +415,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4458,7 +4458,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5076,7 +5076,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..5c349279beb 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect)
+ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4ca34be230c..f84ea6ecc48 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -966,7 +966,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 2ea574b0f06..573a8b61a45 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,14 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -129,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -499,19 +490,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1738,256 +1732,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.39.3
Attachment: v25_0004-update-AX-handle-for-each-database-for-cleanup.patch
From 8450086079480f3784d26c6c85973b84cfa8b484 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 1 Apr 2025 11:08:38 +0530
Subject: [PATCH 4/4] update AX handle for each database for cleanup.
---
src/bin/pg_dump/parallel.c | 10 ++++++++++
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_restore.c | 10 +++++-----
3 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the cleanup-array
+ * entry that was already registered instead of adding a new one.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..ed0238cca47 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -386,6 +386,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index aa8887a4eb0..a142b744222 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -603,14 +603,14 @@ restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
* it's still NULL, the cleanup function will just be a no-op. If we are
- * restoring multiple databases, then save index of exit_nicely so that we
- * can use same slot for all the databases as we already closed the
- * previous archive by CloseArchive.
+ * restoring multiple databases, only the AHX handle needs updating for
+ * cleanup: the previous entry is already in the array and its connection
+ * has been closed, so the same slot can be reused.
*/
if (!append_data || num == 0)
on_exit_close_archive(AH);
- //else
- //replace_on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
--
2.39.3
Attachment: v25_0003-pg_dumpall-with-directory-tar-custom-format.patch
From 10411c96b0294d414deaf475ad82a1dd7821be36 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 1 Apr 2025 10:48:52 +0530
Subject: [PATCH 3/4] pg_dumpall with directory|tar|custom format and restore
it by pg_restore
new option to pg_dumpall:
-F, --format=d|t|c|p output file format (plain text by default)
Ex:1 ./pg_dumpall --format=directory --file=dumpDirName
Ex:2 ./pg_dumpall --format=tar --file=dumpDirName
Ex:3 ./pg_dumpall --format=custom --file=dumpDirName
Ex:4 ./pg_dumpall --format=plain --file=dumpDirName
The dump layout is:
global.dat ::: global SQL commands in simple plain-text format
map.dat ::: "dboid dbname" entries for all databases, in simple text form
databases/ :::
subdir dboid1 -> toc.dat and data files in archive format
subdir dboid2 -> toc.dat and data files in archive format
etc.
---------------------------------------------------------------------------
NOTE:
if needed, a single database can be restored from its particular subdir
Ex: ./pg_restore --format=directory -d postgres dumpDirName/databases/5
-- here, 5 is the dboid of the postgres database
-- to find a database's dboid, look up its name in map.dat
--------------------------------------------------------------------------
new options to pg_restore:
-g, --globals-only restore only global objects, no databases
--exclude-database=PATTERN exclude database whose name matches pattern
When the -g/--globals-only option is given, only globals are restored, no databases.
Design:
When --format=d|t|c is specified and there is no toc.dat in the main directory, then check
for global.dat to restore all databases. If a global.dat file exists in the directory,
then first restore all globals from global.dat and then restore all databases one by one
from the map.dat list (if it exists)
for --exclude-database=PATTERN for pg_restore
as of now: SELECT 1 WHERE XXX OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
if there is no db connection, only exact PATTERN=NAME matching is done
At the end of the restore, we give a warning with the total number of errors (including
global.dat and each database's errors), and for each database we print a warning with the
dbname and its total error count.
thread:
https://www.postgresql.org/message-id/flat/CAKYtNAp9vOtydXL3_pnGJ%2BTetZtN%3DFYSnZSMCqXceU3mkHPxPg%40mail.gmail.com#066433cb5ae007cbe35fefddf796d52f
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
doc/src/sgml/ref/pg_dumpall.sgml | 86 ++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 295 ++++++++--
src/bin/pg_dump/pg_restore.c | 799 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
src/tools/pgindent/typedefs.list | 2 +
10 files changed, 1198 insertions(+), 85 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 765b30a3a66..43fdab2d77e 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into an archive in a specified format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an archive. The archive contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +52,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ A plain-text SQL script will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -121,10 +126,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index c840a807ae9..f14e5866f6c 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database or cluster
+ from an archive created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -140,6 +149,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +177,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +348,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 49bc1ee71ef..17d6e06ec25 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -311,7 +311,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 3f59f8f9d9d..54eb4728928 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -338,9 +338,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file, since we are restoring
+ * a dump of multiple databases that was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -457,7 +462,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1293,7 +1298,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1672,7 +1677,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1692,7 +1698,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index f84ea6ecc48..832a6af7091 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1219,7 +1219,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 573a8b61a45..9d08c6ca0e6 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -195,6 +200,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -244,7 +251,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -272,7 +279,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -421,6 +430,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -483,6 +507,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create a new directory and a global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(global_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", global_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -522,19 +573,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -634,7 +672,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -647,7 +685,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -658,12 +696,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an archive in a specified format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -969,9 +1009,6 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
@@ -1485,6 +1522,7 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
+ static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1496,7 +1534,13 @@ dumpUserConfig(PGconn *conn, const char *username)
res = executeQuery(conn, buf->data);
if (PQntuples(res) > 0)
+ {
+ if (!header_done)
+ fprintf(OPF, "\n--\n-- User Configurations\n--\n");
+ header_done = true;
+
fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", username);
+ }
for (int i = 0; i < PQntuples(res); i++)
{
@@ -1570,10 +1614,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1587,18 +1634,42 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (archDumpFormat == archNull && PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; pg_dump will then create each
+ * database's dump file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * If this is not a plain format dump, then append dboid and dbname to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1628,12 +1717,9 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char global_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(global_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ global_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1663,7 +1760,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, then append file name and dump
+ * format to the pg_dump command to get archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1827,3 +1944,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * This will create a new directory with the given name. If an empty
+ * directory with that name already exists, it is used instead.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("Either remove or empty the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 47f7b0dd3a1..aa8887a4eb0 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,11 +41,15 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -53,18 +57,35 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int read_one_statement(StringInfo inBuf, FILE *pfile);
+static int restore_all_databases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList * dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleOidStringList * dbname_oid_list);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -90,6 +111,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -144,6 +166,7 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -172,7 +195,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -199,11 +222,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -318,6 +344,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -345,6 +374,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -452,6 +488,113 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If the toc.dat file is not present in the given path, check for
+ * global.dat. If global.dat is present, restore all the databases listed
+ * in map.dat (if it exists), skipping any that match --exclude-database
+ * patterns.
+ */
+ if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
+ file_exists_in_directory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql
+ * commands. */
+
+ /*
+ * The -l/--list option is only supported for single-database archives.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must
+ * be specified. Report an error even when the dump contains a single
+ * database, because that database may not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the archive contains a single database, restore from that database's dump file instead.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from global.dat
+ * file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("database restore is skipped because -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restore_all_databases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* process if global.dat file does not exist. */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore a single database from its dump archive.
+ *
+ * returns the number of errors while doing restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -459,9 +602,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, reuse the same exit_nicely slot for each
+ * of them, since the previous archive has already been closed by
+ * CloseArchive.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -481,25 +630,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If the archive was created by pg_dumpall, it can restore multiple databases as well.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -517,6 +663,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -529,6 +676,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches PATTERN\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -569,8 +717,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -675,3 +823,620 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * read_one_statement
+ *
+ * Reads from the given file pointer using fgetc() until a semicolon (the
+ * SQL statement terminator used in global.dat) is seen.
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+read_one_statement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that pattern-match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList * dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("no database connection available; treating --exclude-database patterns as literal names");
+
+ /*
+ * Walk the database list and mark any entry that matches an exclude
+ * pattern so that it is skipped during restore.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ bool skip_db_restore = false;
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * Construct the pattern-matching query: SELECT 1 WHERE XXX
+ * OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default
+ *
+ * XXX represents the string literal database name derived from
+ * the dbname_oid_list, which is initially extracted from the
+ * map.dat file located in the backup directory. That's why we
+ * need quote_literal_cstr.
+ *
+ * If no db connection, then consider PATTERN as NAME.
+ */
+ if (pg_strcasecmp(db_cell->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, quote_literal_cstr(db_cell->str),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ db_cell->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern: \"%s\"", db_cell->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Mark excluded databases; count the ones to restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", db_cell->str);
+ db_cell->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleOidStringList * dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only a global.dat file, return early; there are
+ * no databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restore is skipped because map.dat is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%20s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Strip the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID %u) in map.dat file", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be skipped,
+ * but for now we list all databases and filter them later.
+ */
+ simple_oid_string_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_all_databases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleOidStringList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+
+ /* Save the db name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing global.dat
+ * file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"postgres\"");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Process --exclude-database patterns; with no connection, each
+ * PATTERN is treated as a literal NAME.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * At this point we have the list of databases to restore, with any
+ * excluded names filtered out. Now we can launch parallel workers to
+ * restore these databases.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (db_cell->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
+
+ /*
+ * Determine the dump file for this database: if a .tar or .dmp file
+ * exists, use that file; otherwise assume a directory-format dump
+ * named after the database OID.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", db_cell->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, db_cell->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", db_cell->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, db_cell->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", db_cell->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = db_cell->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", db_cell->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_oid_string_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time; a semicolon is treated as the statement terminator.
+ * If outfile is given, copy the SQL commands into outfile rather than
+ * executing them.
+ *
+ * returns the number of errors while processing global.dat
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement, user_create;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* creation statement for our current role */
+ initStringInfo(&user_create);
+ appendStringInfoString(&user_create, "CREATE ROLE ");
+ /* should use fmtId here, but we don't know the encoding */
+ appendStringInfoString(&user_create, PQuser(conn));
+ appendStringInfoString(&user_create, ";");
+
+ /* Process file till EOF and execute sql statements. */
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
+ {
+ /* don't try to create the role we are connected as */
+ if (strstr(sqlstatement.data, user_create.data))
+ continue;
+
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %s\nCommand was: %s", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors during global.dat. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the global.dat contents into the output file. If "-" is given as
+ * outfile, print the commands to stdout.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..0bbcdbe84a7
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b66cecd8799..95ec8fbb141 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2732,6 +2732,8 @@ ShutdownMode
SignTSVector
SimpleActionList
SimpleActionListCell
+SimpleDatabaseOidList
+SimpleDatabaseOidListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
--
2.39.3
On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote:
On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hi
FWIW I don't think the on_exit_nicely business is in final shape just
yet. We're doing something super strange and novel about keeping track
of an array index, so that we can modify it later. Or something like
that, I think? That doesn't sound all that nice to me. Elsewhere it
was suggested that we need some way to keep track of the list of things
that need cleanup (a list of connections IIRC?) -- perhaps in a
thread-local variable or a global or something -- and we install the
cleanup function once, and that reads from the variable. The program
can add things to the list, or remove them, at will; and we don't need
to modify the cleanup function in any way.
--
Álvaro Herrera               Breisgau, Deutschland — https://www.EnterpriseDB.com/
Thanks Álvaro for the feedback.
I removed the old handling of on_exit_nicely_list from the last patch
set and added one simple function that just updates the archive handle in
shutdown_info (shutdown_info.AHX = AHX;).
For the first database, we add an entry into the on_exit_nicely_list array;
for the remaining databases, we update only shutdown_info, since we have
already closed the connection for the previous database. With this fix, we
do not touch the on_exit_nicely_list entry for each database.
Here, I am attaching updated patches.
OK, looks good. Here's my latest. I'm currently working on tidying up
the docs and comments.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
v20250403-0001-Move-common-pg_dump-code-related-to-connec.patch (text/x-patch)
From aea4ab40f4d461141ba92b50986b68f036d044cb Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 19 Mar 2025 01:18:46 +0530
Subject: [PATCH v20250403 1/4] Move common pg_dump code related to connections
to a new file
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump, so move the
common code to a new file.
new file name: connectdb.c
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
---
src/bin/pg_dump/Makefile | 5 +-
src/bin/pg_dump/connectdb.c | 294 +++++++++++++++++++++++++++
src/bin/pg_dump/connectdb.h | 26 +++
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/pg_backup.h | 6 +-
src/bin/pg_dump/pg_backup_archiver.c | 6 +-
src/bin/pg_dump/pg_backup_db.c | 79 +------
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 278 +------------------------
9 files changed, 352 insertions(+), 345 deletions(-)
create mode 100644 src/bin/pg_dump/connectdb.c
create mode 100644 src/bin/pg_dump/connectdb.h
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 233ad15ca75..fa795883e9f 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -31,6 +31,7 @@ OBJS = \
compress_lz4.o \
compress_none.o \
compress_zstd.o \
+ connectdb.o \
dumputils.o \
filter.o \
parallel.o \
@@ -50,8 +51,8 @@ pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) | submake-libpq submake-libpg
pg_restore: pg_restore.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
$(CC) $(CFLAGS) pg_restore.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
-pg_dumpall: pg_dumpall.o dumputils.o filter.o $(WIN32RES) | submake-libpq submake-libpgport submake-libpgfeutils
- $(CC) $(CFLAGS) pg_dumpall.o dumputils.o filter.o $(WIN32RES) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+pg_dumpall: pg_dumpall.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+ $(CC) $(CFLAGS) pg_dumpall.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
install: all installdirs
$(INSTALL_PROGRAM) pg_dump$(X) '$(DESTDIR)$(bindir)'/pg_dump$(X)
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
new file mode 100644
index 00000000000..9e593b70e81
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.c
@@ -0,0 +1,294 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.c
+ * Common routines for connecting to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/connect.h"
+#include "common/logging.h"
+#include "common/string.h"
+#include "connectdb.h"
+#include "dumputils.h"
+#include "fe_utils/string_utils.h"
+
+static char *constructConnStr(const char **keywords, const char **values);
+
+/*
+ * ConnectDatabase
+ *
+ * Make a database connection with the given parameters. An
+ * interactive password prompt is automatically issued if required.
+ *
+ * If fail_on_error is false, we return NULL without printing any message
+ * on failure, but preserve any prompted password for the next try.
+ *
+ * On success, the 'connstr' is set to a connection string containing
+ * the options used and 'server_version' is set to version so that caller
+ * can use them.
+ */
+PGconn *
+ConnectDatabase(const char *dbname, const char *connection_string,
+ const char *pghost, const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error, const char *progname,
+ const char **connstr, int *server_version, char *password,
+ char *override_dbname)
+{
+ PGconn *conn;
+ bool new_pass;
+ const char *remoteversion_str;
+ int my_version;
+ const char **keywords = NULL;
+ const char **values = NULL;
+ PQconninfoOption *conn_opts = NULL;
+ int server_version_temp;
+
+ if (prompt_password == TRI_YES && !password)
+ password = simple_prompt("Password: ", false);
+
+ /*
+ * Start the connection. Loop until we have a password if requested by
+ * backend.
+ */
+ do
+ {
+ int argcount = 8;
+ PQconninfoOption *conn_opt;
+ char *err_msg = NULL;
+ int i = 0;
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /*
+ * Merge the connection info inputs given in form of connection string
+ * and other options. Explicitly discard any dbname value in the
+ * connection string; otherwise, PQconnectdbParams() would interpret
+ * that value as being itself a connection string.
+ */
+ if (connection_string)
+ {
+ conn_opts = PQconninfoParse(connection_string, &err_msg);
+ if (conn_opts == NULL)
+ pg_fatal("%s", err_msg);
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ argcount++;
+ }
+
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+
+ for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+ {
+ if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
+ strcmp(conn_opt->keyword, "dbname") != 0)
+ {
+ keywords[i] = conn_opt->keyword;
+ values[i] = conn_opt->val;
+ i++;
+ }
+ }
+ }
+ else
+ {
+ keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
+ values = pg_malloc0((argcount + 1) * sizeof(*values));
+ }
+
+ if (pghost)
+ {
+ keywords[i] = "host";
+ values[i] = pghost;
+ i++;
+ }
+ if (pgport)
+ {
+ keywords[i] = "port";
+ values[i] = pgport;
+ i++;
+ }
+ if (pguser)
+ {
+ keywords[i] = "user";
+ values[i] = pguser;
+ i++;
+ }
+ if (password)
+ {
+ keywords[i] = "password";
+ values[i] = password;
+ i++;
+ }
+ if (dbname)
+ {
+ keywords[i] = "dbname";
+ values[i] = dbname;
+ i++;
+ }
+ if (override_dbname)
+ {
+ keywords[i] = "dbname";
+ values[i++] = override_dbname;
+ }
+
+ keywords[i] = "fallback_application_name";
+ values[i] = progname;
+ i++;
+
+ new_pass = false;
+ conn = PQconnectdbParams(keywords, values, true);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", dbname);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ !password &&
+ prompt_password != TRI_NO)
+ {
+ PQfinish(conn);
+ password = simple_prompt("Password: ", false);
+ new_pass = true;
+ }
+ } while (new_pass);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD)
+ {
+ if (fail_on_error)
+ pg_fatal("%s", PQerrorMessage(conn));
+ else
+ {
+ PQfinish(conn);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ return NULL;
+ }
+ }
+
+ /*
+ * Ok, connected successfully. If requested, remember the options used, in
+ * the form of a connection string.
+ */
+ if (connstr)
+ *connstr = constructConnStr(keywords, values);
+
+ free(keywords);
+ free(values);
+ PQconninfoFree(conn_opts);
+
+ /* Check version */
+ remoteversion_str = PQparameterStatus(conn, "server_version");
+ if (!remoteversion_str)
+ pg_fatal("could not get server version");
+
+ server_version_temp = PQserverVersion(conn);
+ if (server_version_temp == 0)
+ pg_fatal("could not parse server version \"%s\"",
+ remoteversion_str);
+
+ /* If requested, then copy server version to out variable. */
+ if (server_version)
+ *server_version = server_version_temp;
+
+ my_version = PG_VERSION_NUM;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ if (my_version != server_version_temp
+ && (server_version_temp < 90200 ||
+ (server_version_temp / 100) > (my_version / 100)))
+ {
+ pg_log_error("aborting because of server version mismatch");
+ pg_log_error_detail("server version: %s; %s version: %s",
+ remoteversion_str, progname, PG_VERSION);
+ exit_nicely(1);
+ }
+
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+
+ return conn;
+}
+
+/*
+ * constructConnStr
+ *
+ * Construct a connection string from the given keyword/value pairs. It is
+ * used to pass the connection options to the pg_dump subprocess.
+ *
+ * The following parameters are excluded:
+ * dbname - varies in each pg_dump invocation
+ * password - it's not secure to pass a password on the command line
+ * fallback_application_name - we'll let pg_dump set it
+ */
+static char *
+constructConnStr(const char **keywords, const char **values)
+{
+ PQExpBuffer buf = createPQExpBuffer();
+ char *connstr;
+ int i;
+ bool firstkeyword = true;
+
+ /* Construct a new connection string in key='value' format. */
+ for (i = 0; keywords[i] != NULL; i++)
+ {
+ if (strcmp(keywords[i], "dbname") == 0 ||
+ strcmp(keywords[i], "password") == 0 ||
+ strcmp(keywords[i], "fallback_application_name") == 0)
+ continue;
+
+ if (!firstkeyword)
+ appendPQExpBufferChar(buf, ' ');
+ firstkeyword = false;
+ appendPQExpBuffer(buf, "%s=", keywords[i]);
+ appendConnStrVal(buf, values[i]);
+ }
+
+ connstr = pg_strdup(buf->data);
+ destroyPQExpBuffer(buf);
+ return connstr;
+}
+
+/*
+ * executeQuery
+ *
+ * Run a query, return the results, exit program on failure.
+ */
+PGresult *
+executeQuery(PGconn *conn, const char *query)
+{
+ PGresult *res;
+
+ pg_log_info("executing %s", query);
+
+ res = PQexec(conn, query);
+ if (!res ||
+ PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_log_error("query failed: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ return res;
+}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
new file mode 100644
index 00000000000..6c1e1954769
--- /dev/null
+++ b/src/bin/pg_dump/connectdb.h
@@ -0,0 +1,26 @@
+/*-------------------------------------------------------------------------
+ *
+ * connectdb.h
+ * Common header file for connection to the database.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/connectdb.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef CONNECTDB_H
+#define CONNECTDB_H
+
+#include "pg_backup.h"
+#include "pg_backup_utils.h"
+
+extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string, const char *pghost,
+ const char *pgport, const char *pguser,
+ trivalue prompt_password, bool fail_on_error,
+ const char *progname, const char **connstr, int *server_version,
+ char *password, char *override_dbname);
+extern PGresult *executeQuery(PGconn *conn, const char *query);
+#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 603ba6cfbf0..25989e8f16b 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -6,6 +6,7 @@ pg_dump_common_sources = files(
'compress_lz4.c',
'compress_none.c',
'compress_zstd.c',
+ 'connectdb.c',
'dumputils.c',
'filter.c',
'parallel.c',
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 658986de6f8..49bc1ee71ef 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -293,9 +293,9 @@ typedef void (*SetupWorkerPtrType) (Archive *AH);
* Main archiver interface.
*/
-extern void ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect);
+extern void ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect);
extern void DisconnectDatabase(Archive *AHX);
extern PGconn *GetConnection(Archive *AHX);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 1d131e5a57d..3f59f8f9d9d 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -415,7 +415,7 @@ RestoreArchive(Archive *AHX)
AHX->minRemoteVersion = 0;
AHX->maxRemoteVersion = 9999999;
- ConnectDatabase(AHX, &ropt->cparams, false);
+ ConnectDatabaseAhx(AHX, &ropt->cparams, false);
/*
* If we're talking to the DB directly, don't send comments since they
@@ -4458,7 +4458,7 @@ restore_toc_entries_postfork(ArchiveHandle *AH, TocEntry *pending_list)
/*
* Now reconnect the single parent connection.
*/
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
/* re-establish fixed state */
_doSetFixedOutputState(AH);
@@ -5076,7 +5076,7 @@ CloneArchive(ArchiveHandle *AH)
* Connect our new clone object to the database, using the same connection
* parameters used for the original connection.
*/
- ConnectDatabase((Archive *) clone, &clone->public.ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) clone, &clone->public.ropt->cparams, true);
/* re-establish fixed state */
if (AH->mode == archModeRead)
diff --git a/src/bin/pg_dump/pg_backup_db.c b/src/bin/pg_dump/pg_backup_db.c
index 71c55d2466a..5c349279beb 100644
--- a/src/bin/pg_dump/pg_backup_db.c
+++ b/src/bin/pg_dump/pg_backup_db.c
@@ -19,6 +19,7 @@
#include "common/connect.h"
#include "common/string.h"
+#include "connectdb.h"
#include "parallel.h"
#include "pg_backup_archiver.h"
#include "pg_backup_db.h"
@@ -86,9 +87,9 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* ArchiveHandle's connCancel, before closing old connection. Otherwise
* an ill-timed SIGINT could try to access a dead connection.
*/
- AH->connection = NULL; /* dodge error check in ConnectDatabase */
+ AH->connection = NULL; /* dodge error check in ConnectDatabaseAhx */
- ConnectDatabase((Archive *) AH, &ropt->cparams, true);
+ ConnectDatabaseAhx((Archive *) AH, &ropt->cparams, true);
PQfinish(oldConn);
}
@@ -105,14 +106,13 @@ ReconnectToServer(ArchiveHandle *AH, const char *dbname)
* username never does change, so one savedPassword is sufficient.
*/
void
-ConnectDatabase(Archive *AHX,
- const ConnParams *cparams,
- bool isReconnect)
+ConnectDatabaseAhx(Archive *AHX,
+ const ConnParams *cparams,
+ bool isReconnect)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
trivalue prompt_password;
char *password;
- bool new_pass;
if (AH->connection)
pg_fatal("already connected to a database");
@@ -125,69 +125,10 @@ ConnectDatabase(Archive *AHX,
if (prompt_password == TRI_YES && password == NULL)
password = simple_prompt("Password: ", false);
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- const char *keywords[8];
- const char *values[8];
- int i = 0;
-
- /*
- * If dbname is a connstring, its entries can override the other
- * values obtained from cparams; but in turn, override_dbname can
- * override the dbname component of it.
- */
- keywords[i] = "host";
- values[i++] = cparams->pghost;
- keywords[i] = "port";
- values[i++] = cparams->pgport;
- keywords[i] = "user";
- values[i++] = cparams->username;
- keywords[i] = "password";
- values[i++] = password;
- keywords[i] = "dbname";
- values[i++] = cparams->dbname;
- if (cparams->override_dbname)
- {
- keywords[i] = "dbname";
- values[i++] = cparams->override_dbname;
- }
- keywords[i] = "fallback_application_name";
- values[i++] = progname;
- keywords[i] = NULL;
- values[i++] = NULL;
- Assert(i <= lengthof(keywords));
-
- new_pass = false;
- AH->connection = PQconnectdbParams(keywords, values, true);
-
- if (!AH->connection)
- pg_fatal("could not connect to database");
-
- if (PQstatus(AH->connection) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(AH->connection) &&
- password == NULL &&
- prompt_password != TRI_NO)
- {
- PQfinish(AH->connection);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(AH->connection) == CONNECTION_BAD)
- {
- if (isReconnect)
- pg_fatal("reconnection failed: %s",
- PQerrorMessage(AH->connection));
- else
- pg_fatal("%s",
- PQerrorMessage(AH->connection));
- }
+ AH->connection = ConnectDatabase(cparams->dbname, NULL, cparams->pghost,
+ cparams->pgport, cparams->username,
+ prompt_password, true,
+ progname, NULL, NULL, password, cparams->override_dbname);
/* Start strict; later phases may override this. */
PQclear(ExecuteSqlQueryForSingleRow((Archive *) AH,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 04c87ba8854..d90b6183792 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -966,7 +966,7 @@ main(int argc, char **argv)
* Open the database using the Archiver, so it knows about it. Errors mean
* death.
*/
- ConnectDatabase(fout, &dopt.cparams, false);
+ ConnectDatabaseAhx(fout, &dopt.cparams, false);
setup_connection(fout, dumpencoding, dumpsnapshot, use_role);
/*
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 2ea574b0f06..573a8b61a45 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -24,11 +24,11 @@
#include "common/hashfn_unstable.h"
#include "common/logging.h"
#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
-#include "pg_backup.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -71,21 +71,14 @@ static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
PQExpBuffer buffer);
-static PGconn *connectDatabase(const char *dbname,
- const char *connection_string, const char *pghost,
- const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error);
-static char *constructConnStr(const char **keywords, const char **values);
-static PGresult *executeQuery(PGconn *conn, const char *query);
static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static char pg_dump_bin[MAXPGPATH];
-static const char *progname;
static PQExpBuffer pgdumpopts;
-static char *connstr = "";
+static const char *connstr = "";
static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
@@ -129,8 +122,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-#define exit_nicely(code) exit(code)
-
int
main(int argc, char *argv[])
{
@@ -499,19 +490,22 @@ main(int argc, char *argv[])
*/
if (pgdb)
{
- conn = connectDatabase(pgdb, connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase(pgdb, connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", pgdb);
}
else
{
- conn = connectDatabase("postgres", connstr, pghost, pgport, pguser,
- prompt_password, false);
+ conn = ConnectDatabase("postgres", connstr, pghost, pgport, pguser,
+ prompt_password, false,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
- conn = connectDatabase("template1", connstr, pghost, pgport, pguser,
- prompt_password, true);
+ conn = ConnectDatabase("template1", connstr, pghost, pgport, pguser,
+ prompt_password, true,
+ progname, &connstr, &server_version, NULL, NULL);
if (!conn)
{
@@ -1738,256 +1732,6 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
destroyPQExpBuffer(sql);
}
-/*
- * Make a database connection with the given parameters. An
- * interactive password prompt is automatically issued if required.
- *
- * If fail_on_error is false, we return NULL without printing any message
- * on failure, but preserve any prompted password for the next try.
- *
- * On success, the global variable 'connstr' is set to a connection string
- * containing the options used.
- */
-static PGconn *
-connectDatabase(const char *dbname, const char *connection_string,
- const char *pghost, const char *pgport, const char *pguser,
- trivalue prompt_password, bool fail_on_error)
-{
- PGconn *conn;
- bool new_pass;
- const char *remoteversion_str;
- int my_version;
- const char **keywords = NULL;
- const char **values = NULL;
- PQconninfoOption *conn_opts = NULL;
- static char *password = NULL;
-
- if (prompt_password == TRI_YES && !password)
- password = simple_prompt("Password: ", false);
-
- /*
- * Start the connection. Loop until we have a password if requested by
- * backend.
- */
- do
- {
- int argcount = 6;
- PQconninfoOption *conn_opt;
- char *err_msg = NULL;
- int i = 0;
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /*
- * Merge the connection info inputs given in form of connection string
- * and other options. Explicitly discard any dbname value in the
- * connection string; otherwise, PQconnectdbParams() would interpret
- * that value as being itself a connection string.
- */
- if (connection_string)
- {
- conn_opts = PQconninfoParse(connection_string, &err_msg);
- if (conn_opts == NULL)
- pg_fatal("%s", err_msg);
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- argcount++;
- }
-
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
-
- for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
- {
- if (conn_opt->val != NULL && conn_opt->val[0] != '\0' &&
- strcmp(conn_opt->keyword, "dbname") != 0)
- {
- keywords[i] = conn_opt->keyword;
- values[i] = conn_opt->val;
- i++;
- }
- }
- }
- else
- {
- keywords = pg_malloc0((argcount + 1) * sizeof(*keywords));
- values = pg_malloc0((argcount + 1) * sizeof(*values));
- }
-
- if (pghost)
- {
- keywords[i] = "host";
- values[i] = pghost;
- i++;
- }
- if (pgport)
- {
- keywords[i] = "port";
- values[i] = pgport;
- i++;
- }
- if (pguser)
- {
- keywords[i] = "user";
- values[i] = pguser;
- i++;
- }
- if (password)
- {
- keywords[i] = "password";
- values[i] = password;
- i++;
- }
- if (dbname)
- {
- keywords[i] = "dbname";
- values[i] = dbname;
- i++;
- }
- keywords[i] = "fallback_application_name";
- values[i] = progname;
- i++;
-
- new_pass = false;
- conn = PQconnectdbParams(keywords, values, true);
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", dbname);
-
- if (PQstatus(conn) == CONNECTION_BAD &&
- PQconnectionNeedsPassword(conn) &&
- !password &&
- prompt_password != TRI_NO)
- {
- PQfinish(conn);
- password = simple_prompt("Password: ", false);
- new_pass = true;
- }
- } while (new_pass);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD)
- {
- if (fail_on_error)
- pg_fatal("%s", PQerrorMessage(conn));
- else
- {
- PQfinish(conn);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- return NULL;
- }
- }
-
- /*
- * Ok, connected successfully. Remember the options used, in the form of a
- * connection string.
- */
- connstr = constructConnStr(keywords, values);
-
- free(keywords);
- free(values);
- PQconninfoFree(conn_opts);
-
- /* Check version */
- remoteversion_str = PQparameterStatus(conn, "server_version");
- if (!remoteversion_str)
- pg_fatal("could not get server version");
- server_version = PQserverVersion(conn);
- if (server_version == 0)
- pg_fatal("could not parse server version \"%s\"",
- remoteversion_str);
-
- my_version = PG_VERSION_NUM;
-
- /*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dump.c.)
- */
- if (my_version != server_version
- && (server_version < 90200 ||
- (server_version / 100) > (my_version / 100)))
- {
- pg_log_error("aborting because of server version mismatch");
- pg_log_error_detail("server version: %s; %s version: %s",
- remoteversion_str, progname, PG_VERSION);
- exit_nicely(1);
- }
-
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
-
- return conn;
-}
-
-/* ----------
- * Construct a connection string from the given keyword/value pairs. It is
- * used to pass the connection options to the pg_dump subprocess.
- *
- * The following parameters are excluded:
- * dbname - varies in each pg_dump invocation
- * password - it's not secure to pass a password on the command line
- * fallback_application_name - we'll let pg_dump set it
- * ----------
- */
-static char *
-constructConnStr(const char **keywords, const char **values)
-{
- PQExpBuffer buf = createPQExpBuffer();
- char *connstr;
- int i;
- bool firstkeyword = true;
-
- /* Construct a new connection string in key='value' format. */
- for (i = 0; keywords[i] != NULL; i++)
- {
- if (strcmp(keywords[i], "dbname") == 0 ||
- strcmp(keywords[i], "password") == 0 ||
- strcmp(keywords[i], "fallback_application_name") == 0)
- continue;
-
- if (!firstkeyword)
- appendPQExpBufferChar(buf, ' ');
- firstkeyword = false;
- appendPQExpBuffer(buf, "%s=", keywords[i]);
- appendConnStrVal(buf, values[i]);
- }
-
- connstr = pg_strdup(buf->data);
- destroyPQExpBuffer(buf);
- return connstr;
-}
-
-/*
- * Run a query, return the results, exit program on failure.
- */
-static PGresult *
-executeQuery(PGconn *conn, const char *query)
-{
- PGresult *res;
-
- pg_log_info("executing %s", query);
-
- res = PQexec(conn, query);
- if (!res ||
- PQresultStatus(res) != PGRES_TUPLES_OK)
- {
- pg_log_error("query failed: %s", PQerrorMessage(conn));
- pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- return res;
-}
-
/*
* As above for a SQL command (which returns nothing).
*/
--
2.34.1
v20250403-0002-add-new-list-type-simple_oid_string_list-t.patch (text/x-patch)
From 367ecd4f6870f9402a2254152bbf7e97ff1f2a23 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Fri, 28 Mar 2025 18:10:24 -0400
Subject: [PATCH v20250403 2/4] add new list type simple_oid_string_list to
fe-utils/simple_list
This type contains both an oid and a string.
This will be used in forthcoming changes to pg_restore.
Author: Andrew Dunstan <andrew@dunslane.net>
---
src/fe_utils/simple_list.c | 41 ++++++++++++++++++++++++++++++
src/include/fe_utils/simple_list.h | 16 ++++++++++++
src/tools/pgindent/typedefs.list | 2 ++
3 files changed, 59 insertions(+)
diff --git a/src/fe_utils/simple_list.c b/src/fe_utils/simple_list.c
index 483d5455594..b0686e57c4a 100644
--- a/src/fe_utils/simple_list.c
+++ b/src/fe_utils/simple_list.c
@@ -192,3 +192,44 @@ simple_ptr_list_destroy(SimplePtrList *list)
cell = next;
}
}
+
+/*
+ * Add to an oid_string list
+ */
+void
+simple_oid_string_list_append(SimpleOidStringList *list, Oid oid, const char *str)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = (SimpleOidStringListCell *)
+ pg_malloc(offsetof(SimpleOidStringListCell, str) + strlen(str) + 1);
+
+ cell->next = NULL;
+ cell->oid = oid;
+ strcpy(cell->str, str);
+
+ if (list->tail)
+ list->tail->next = cell;
+ else
+ list->head = cell;
+ list->tail = cell;
+}
+
+/*
+ * Destroy an oid_string list
+ */
+void
+simple_oid_string_list_destroy(SimpleOidStringList *list)
+{
+ SimpleOidStringListCell *cell;
+
+ cell = list->head;
+ while (cell != NULL)
+ {
+ SimpleOidStringListCell *next;
+
+ next = cell->next;
+ pg_free(cell);
+ cell = next;
+ }
+}
diff --git a/src/include/fe_utils/simple_list.h b/src/include/fe_utils/simple_list.h
index 3b8e38414ec..a5373932555 100644
--- a/src/include/fe_utils/simple_list.h
+++ b/src/include/fe_utils/simple_list.h
@@ -55,6 +55,19 @@ typedef struct SimplePtrList
SimplePtrListCell *tail;
} SimplePtrList;
+typedef struct SimpleOidStringListCell
+{
+ struct SimpleOidStringListCell *next;
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} SimpleOidStringListCell;
+
+typedef struct SimpleOidStringList
+{
+ SimpleOidStringListCell *head;
+ SimpleOidStringListCell *tail;
+} SimpleOidStringList;
+
extern void simple_oid_list_append(SimpleOidList *list, Oid val);
extern bool simple_oid_list_member(SimpleOidList *list, Oid val);
extern void simple_oid_list_destroy(SimpleOidList *list);
@@ -68,4 +81,7 @@ extern const char *simple_string_list_not_touched(SimpleStringList *list);
extern void simple_ptr_list_append(SimplePtrList *list, void *ptr);
extern void simple_ptr_list_destroy(SimplePtrList *list);
+extern void simple_oid_string_list_append(SimpleOidStringList *list, Oid oid, const char *str);
+extern void simple_oid_string_list_destroy(SimpleOidStringList *list);
+
#endif /* SIMPLE_LIST_H */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8f28d8ff28e..8be62c9216a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2744,6 +2744,8 @@ SimpleActionListCell
SimpleEcontextStackEntry
SimpleOidList
SimpleOidListCell
+SimpleOidStringList
+SimpleOidStringListCell
SimplePtrList
SimplePtrListCell
SimpleStats
--
2.34.1
v20250403-0003-Non-text-modes-for-pg_dumpall-correspondin.patch (text/x-patch)
From 76214ce8a2a767a68f30e72f4f79f85880241e52 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 1 Apr 2025 10:48:52 +0530
Subject: [PATCH v20250403 3/4] Non-text modes for pg_dumpall, correspondingly
change pg_restore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, globals.dat and map.dat. The
first contains SQL for restoring the global data, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing globals.dat, and no
toc.dat, it restores the global settings and then restores each
database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Co-authored-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Srinath Reddy <srinath2133@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/cb103623-8ee6-4ba5-a2c9-f32e3a4933fa@dunslane.net
---
doc/src/sgml/ref/pg_dumpall.sgml | 86 ++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 20 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 295 ++++++++--
src/bin/pg_dump/pg_restore.c | 800 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 9 +
11 files changed, 1208 insertions(+), 85 deletions(-)
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 765b30a3a66..43fdab2d77e 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster using a specified dump format</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an archive. The archive contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +52,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -121,10 +126,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using each database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index c840a807ae9..f14e5866f6c 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore a <productname>PostgreSQL</productname> database or cluster
+ from an archive created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -140,6 +149,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -166,6 +177,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-e</option></term>
<term><option>--exit-on-error</option></term>
@@ -315,6 +348,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, replace the archive handle
+ * in the already-registered shutdown_info entry used for cleanup.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 49bc1ee71ef..17d6e06ec25 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -311,7 +311,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 3f59f8f9d9d..54eb4728928 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -85,7 +85,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -338,9 +338,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, open the output file in append mode; this is used
+ * when restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -457,7 +462,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1293,7 +1298,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1672,7 +1677,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1692,7 +1698,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index a2064f471ed..ed0238cca47 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -386,6 +386,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..d94d0de2a5d 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index d90b6183792..9dcda63b4b8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1219,7 +1219,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 573a8b61a45..248afc4be28 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -15,6 +15,7 @@
#include "postgres_fe.h"
+#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
@@ -64,9 +65,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -75,6 +77,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static void create_or_open_dir(const char *dirname);
+static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -146,6 +150,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -195,6 +200,8 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ ArchiveFormat archDumpFormat = archNull;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -244,7 +251,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -272,7 +279,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -421,6 +430,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -483,6 +507,33 @@ main(int argc, char *argv[])
if (statistics_only)
appendPQExpBufferStr(pgdumpopts, " --statistics-only");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory and its global.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+
+ OPF = fopen(global_path, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open file \"%s\": %m", global_path);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -522,19 +573,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -634,7 +672,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
PQfinish(conn);
@@ -647,7 +685,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -658,12 +696,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an archive in the specified dump format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -969,9 +1009,6 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
@@ -1485,6 +1522,7 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
+ static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1496,7 +1534,13 @@ dumpUserConfig(PGconn *conn, const char *username)
res = executeQuery(conn, buf->data);
if (PQntuples(res) > 0)
+ {
+ if (!header_done)
+ fprintf(OPF, "\n--\n-- User Configurations\n--\n");
+ header_done = true;
+
fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", username);
+ }
for (int i = 0; i < PQntuples(res); i++)
{
@@ -1570,10 +1614,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1587,18 +1634,42 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (archDumpFormat == archNull && PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; pg_dump will then create each database's
+ * dump file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, 0755) != 0)
+ pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
continue;
}
+ /*
+ * If this is not a plain format dump, then append dboid and dbname to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
pg_log_info("dumping database \"%s\"", dbname);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1628,12 +1717,9 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- OPF = fopen(filename, PG_BINARY_A);
+ char global_path[MAXPGPATH];
+
+ if (archDumpFormat != archNull)
+ snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+
+ OPF = fopen(global_path, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- filename);
+ global_path);
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1663,7 +1760,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For non-plain formats, pass the output file name and the archive
+ * format option to the pg_dump command.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1827,3 +1944,91 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * create_or_open_dir
+ *
+ * Create a new directory with the given name, or accept an existing
+ * directory of that name if it is empty.
+ */
+static void
+create_or_open_dir(const char *dirname)
+{
+ struct stat st;
+ bool is_empty = false;
+
+ /* we accept an empty existing directory */
+ if (stat(dirname, &st) == 0 && S_ISDIR(st.st_mode))
+ {
+ DIR *dir = opendir(dirname);
+
+ if (dir)
+ {
+ struct dirent *d;
+
+ is_empty = true;
+
+ while (errno = 0, (d = readdir(dir)))
+ {
+ if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
+ {
+ is_empty = false;
+ break;
+ }
+ }
+
+ if (errno)
+ pg_fatal("could not read directory \"%s\": %m",
+ dirname);
+
+ if (closedir(dir))
+ pg_fatal("could not close directory \"%s\": %m",
+ dirname);
+ }
+
+ if (!is_empty)
+ {
+ pg_log_error("directory \"%s\" exists but is not empty", dirname);
+ pg_log_error_hint("To dump into this directory, either remove or empty "
+ "the directory \"%s\", or run %s "
+ "with an argument other than \"%s\".",
+ dirname, progname, dirname);
+ exit_nicely(1);
+ }
+ }
+ else if (mkdir(dirname, 0700) < 0)
+ pg_fatal("could not create directory \"%s\": %m", dirname);
+}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized archive format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 47f7b0dd3a1..175f28a7421 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,11 +41,15 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -53,18 +57,35 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num);
+static int read_one_statement(StringInfo inBuf, FILE *pfile);
+static int restore_all_databases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
+ const char *outfile);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimpleOidStringList *dbname_oid_list);
+static size_t quote_literal_internal(char *dst, const char *src, size_t len);
+static char *quote_literal_cstr(const char *rawstr);
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -90,6 +111,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -144,6 +166,7 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
+ {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -172,7 +195,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -199,11 +222,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the global.dat file from the directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -318,6 +344,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 6: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
default:
/* getopt_long already emitted a complaint */
@@ -345,6 +374,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -452,6 +488,113 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.dat is not present in the given path, check for global.dat.
+ * If global.dat is present, restore all databases listed in map.dat
+ * (if it exists), skipping any that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
+ file_exists_in_directory(inputFileSpec, "global.dat"))
+ {
+ PGconn *conn = NULL; /* Connection to restore global sql
+ * commands. */
+
+ /*
+ * The -l/--list option is only supported for single-database dumps.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring multiple databases from a pg_dumpall archive");
+
+ /*
+ * To restore multiple databases, the -C (create database) option must
+ * be specified. Report an error even if the dump contains only a
+ * single database, since that database may not have been created yet.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring multiple databases from a pg_dumpall archive");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("If the database already exists and the archive contains a single database, restore from that database's dump file directly.");
+ exit_nicely(1);
+ }
+
+ /*
+ * Connect to database to execute global sql commands from global.dat
+ * file.
+ */
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ /*
+ * Open global.dat file and execute/append all the global sql
+ * commands.
+ */
+ n_errors = process_global_sql_commands(conn, inputFileSpec,
+ opts->filename);
+
+ if (conn)
+ PQfinish(conn);
+
+ pg_log_info("databases restoring is skipped as -g/--globals-only option is specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat file. */
+ n_errors = restore_all_databases(conn, inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* global.dat does not exist; restore a single database */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring multiple databases by archive of pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring multiple databases by archive of pg_dumpall");
+
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database from its toc.dat file.
+ *
+ * Returns the number of errors ignored during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -459,9 +602,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. When
+ * restoring multiple databases, just update the archive handle in the
+ * existing cleanup entry: the previous connection has already been
+ * closed, so its slot in the array can be reused.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -481,25 +630,22 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n"
+ "If archive is created by pg_dumpall, then restores multiple databases also. \n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -517,6 +663,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -529,6 +676,7 @@ usage(const char *progname)
printf(_(" -S, --superuser=NAME superuser user name to use for disabling triggers\n"));
printf(_(" -t, --table=NAME restore named relation (table, view, etc.)\n"));
printf(_(" -T, --trigger=NAME restore named trigger\n"));
+ printf(_(" --exclude-database=PATTERN exclude databases whose name matches with pattern\n"));
printf(_(" -x, --no-privileges skip restoration of access privileges (grant/revoke)\n"));
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
@@ -569,8 +717,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be combined\n"
+ "and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -675,3 +823,621 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the given file exists in the specified directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * read_one_statement
+ *
+ * Read one SQL statement from the given file pointer using fgetc, up to
+ * and including the terminating semicolon (the statement terminator in
+ * global.dat).
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
+
+static int
+read_one_statement(StringInfo inBuf, FILE *pfile)
+{
+ int c; /* character read from getc() */
+ int m;
+
+ StringInfoData q;
+
+ initStringInfo(&q);
+
+ resetStringInfo(inBuf);
+
+ /*
+ * Read characters until EOF or the appropriate delimiter is seen.
+ */
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != '\n' && c != ';')
+ {
+ appendStringInfoChar(inBuf, (char) c);
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ if (c != '\'' && c != '"' && c != ';' && c != '\n')
+ appendStringInfoChar(inBuf, (char) c);
+ else
+ break;
+ }
+ }
+
+ if (c == '\'' || c == '"')
+ {
+ appendStringInfoChar(&q, (char) c);
+ m = c;
+
+ while ((c = fgetc(pfile)) != EOF)
+ {
+ appendStringInfoChar(&q, (char) c);
+
+ if (c == m)
+ {
+ appendStringInfoString(inBuf, q.data);
+ resetStringInfo(&q);
+ break;
+ }
+ }
+ }
+
+ if (c == ';')
+ {
+ appendStringInfoChar(inBuf, (char) ';');
+ break;
+ }
+
+ if (c == '\n')
+ appendStringInfoChar(inBuf, (char) '\n');
+ }
+
+ /* No input before EOF signal means time to quit. */
+ if (c == EOF && inBuf->len == 0)
+ return EOF;
+
+ /* Add '\0' to make it look the same as message case. */
+ appendStringInfoChar(inBuf, (char) '\0');
+
+ return 'Q';
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match a pattern
+ * in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimpleOidStringList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn)
+ pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ bool skip_db_restore = false;
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * Construct the pattern-matching query: SELECT 1 WHERE XXX
+ * OPERATOR(pg_catalog.~) '^(PATTERN)$' COLLATE pg_catalog.default,
+ * where XXX is the database name string literal taken from
+ * dbname_oid_list (which was read from the map.dat file in the dump
+ * directory); that is why quote_literal_cstr is needed.
+ *
+ * Without a connection, the pattern is compared as a plain name.
+ */
+ if (pg_strcasecmp(db_cell->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, quote_literal_cstr(db_cell->str),
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ db_cell->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database \"%s\" matches exclude pattern: \"%s\"", db_cell->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ /* Skip the excluded database, or count it as one to restore. */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", db_cell->str);
+ db_cell->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleOidStringList *dbname_oid_list)
+{
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ char line[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is only global.dat file in dump, then return from here as
+ * there is no database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("databases restoring is skipped as map.dat file is not present in \"%s\"", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open map.dat file: \"%s\"", map_file_path);
+
+ /* Append all the dbname and db_oid to the list. */
+ while ((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+ Oid db_oid = InvalidOid;
+ char db_oid_str[MAXPGPATH + 1] = {'\0'};
+ char dbname[MAXPGPATH + 1] = {'\0'};
+
+ /* Extract dboid. */
+ sscanf(line, "%u", &db_oid);
+ sscanf(line, "%20s", db_oid_str);
+
+ /* Now copy dbname. */
+ strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+ /* Strip the trailing newline from dbname. */
+ dbname[strlen(dbname) - 1] = '\0';
+
+ pg_log_info("found database \"%s\" (OID: %u) in map.dat file while restoring.", dbname, db_oid);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+ pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
+
+ /*
+ * XXX: we could check here whether this database should be skipped,
+ * but for now we simply list all the databases.
+ */
+ simple_oid_string_list_append(dbname_oid_list, db_oid, dbname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, based
+ * on the map.dat mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors ignored during the restore.
+ */
+static int
+restore_all_databases(PGconn *conn, const char *dumpdirpath,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimpleOidStringList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+
+ /* Save the connection database name so it can be reused for each database. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
+
+ /*
+ * If map.dat has no entry, return from here after processing global.dat
+ * file.
+ */
+ if (dbname_oid_list.head == NULL)
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ pg_log_info("found total %d database names in map.dat file", num_total_db);
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect database \"postgres\" to dump into out file");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Fall back to template1 if connecting to postgres failed. */
+ if (!conn)
+ {
+ pg_log_info("could not connect to database \"postgres\", trying database \"template1\" instead");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+
+ /*
+ * Filter the database list with --exclude-database patterns (matched
+ * as plain names if there is no connection).
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Open global.dat file and execute/append all the global sql commands. */
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Exit if no database needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info("no database needs to be restored out of %d databases", num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We now have the list of databases to restore after applying the
+ * --exclude-database exclusions. Restore each of them in turn (each
+ * individual restore may itself use parallel workers).
+ */
+ for (SimpleOidStringListCell * db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (db_cell->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
+
+ /*
+ * Locate the database dump. If a <dboid>.tar or <dboid>.dmp file
+ * exists, use that file; otherwise fall back to the <dboid>
+ * subdirectory under databases/.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", db_cell->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, db_cell->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", db_cell->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, db_cell->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", db_cell->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = db_cell->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, count);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", db_cell->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log the number of restored databases. */
+ pg_log_info("restored %d databases", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_oid_string_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
+
+/*
+ * process_global_sql_commands
+ *
+ * Open the global.dat file and execute the global SQL commands one
+ * statement at a time (semicolon is the statement terminator). If
+ * outfile is given, copy the SQL commands into it instead of executing
+ * them.
+ *
+ * Returns the number of errors encountered while processing global.dat.
+ */
+static int
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+{
+ char global_file_path[MAXPGPATH];
+ PGresult *result;
+ StringInfoData sqlstatement,
+ user_create;
+ FILE *pfile;
+ int n_errors = 0;
+
+ snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+
+ /* Open global.dat file. */
+ pfile = fopen(global_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open global.dat file: \"%s\"", global_file_path);
+
+ /*
+ * If outfile is given, then just copy all global.dat file data into
+ * outfile.
+ */
+ if (outfile)
+ {
+ copy_or_print_global_file(outfile, pfile);
+ return 0;
+ }
+
+ /* Init sqlstatement to append commands. */
+ initStringInfo(&sqlstatement);
+
+ /* creation statement for our current role */
+ initStringInfo(&user_create);
+ appendStringInfoString(&user_create, "CREATE ROLE ");
+ /* should use fmtId here, but we don't know the encoding */
+ appendStringInfoString(&user_create, PQuser(conn));
+ appendStringInfoString(&user_create, ";");
+
+ /* Process file till EOF and execute sql statements. */
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
+ {
+ /* don't try to create the role we are connected as */
+ if (strstr(sqlstatement.data, user_create.data))
+ continue;
+
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: \"%s\" \nCommand was: \"%s\"", PQerrorMessage(conn), sqlstatement.data);
+ }
+ PQclear(result);
+ }
+
+ /* Print a summary of ignored errors during global.dat processing. */
+ if (n_errors)
+ pg_log_warning("errors ignored on global.dat file restore: %d", n_errors);
+
+ fclose(pfile);
+
+ return n_errors;
+}
+
+/*
+ * copy_or_print_global_file
+ *
+ * Copy the contents of global.dat to outfile; if outfile is "-", print
+ * them to stdout instead.
+ */
+static void
+copy_or_print_global_file(const char *outfile, FILE *pfile)
+{
+ char out_file_path[MAXPGPATH];
+ FILE *OPF;
+ int c;
+
+ /* "-" is used for stdout. */
+ if (strcmp(outfile, "-") == 0)
+ OPF = stdout;
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ OPF = fopen(out_file_path, PG_BINARY_W);
+
+ if (OPF == NULL)
+ {
+ fclose(pfile);
+ pg_fatal("could not open file: \"%s\"", outfile);
+ }
+ }
+
+ /* Append global.dat into out file or print to the stdout. */
+ while ((c = fgetc(pfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(pfile);
+
+ /* Close out file. */
+ if (strcmp(outfile, "-") != 0)
+ fclose(OPF);
+}
+
+/*
+ * quote_literal_internal
+ */
+static size_t
+quote_literal_internal(char *dst, const char *src, size_t len)
+{
+ const char *s;
+ char *savedst = dst;
+
+ for (s = src; s < src + len; s++)
+ {
+ if (*s == '\\')
+ {
+ *dst++ = ESCAPE_STRING_SYNTAX;
+ break;
+ }
+ }
+
+ *dst++ = '\'';
+ while (len-- > 0)
+ {
+ if (SQL_STR_DOUBLE(*src, true))
+ *dst++ = *src;
+ *dst++ = *src++;
+ }
+ *dst++ = '\'';
+
+ return dst - savedst;
+}
+
+/*
+ * quote_literal_cstr
+ *
+ * returns a properly quoted literal
+ * copied from src/backend/utils/adt/quote.c
+ */
+static char *
+quote_literal_cstr(const char *rawstr)
+{
+ char *result;
+ int len;
+ int newlen;
+
+ len = strlen(rawstr);
+
+ /* We make a worst-case result area; wasting a little space is OK */
+ result = pg_malloc(len * 2 + 3 + 1);
+
+ newlen = quote_literal_internal(result, rawstr, len);
+ result[newlen] = '\0';
+
+ return result;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 37d893d5e6a..0bbcdbe84a7 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,11 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +249,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized archive format "x";\E/,
+ 'pg_dumpall: unrecognized archive format');
done_testing();
--
2.34.1
Attachment: v20250403-0004-Add-more-TAP-tests-for-pg_dumpall.patch (text/x-patch)
From ddf415212e67d450d00e7357e38c5e784d62eeb3 Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Thu, 3 Apr 2025 14:45:52 -0400
Subject: [PATCH v20250403 4/4] Add more TAP tests for pg_dumpall
Author: Matheus Alcantara <matheusssilv97@gmail.com>
---
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/t/006_pg_dumpall.pl | 331 ++++++++++++++++++++++++++++
2 files changed, 332 insertions(+)
create mode 100644 src/bin/pg_dump/t/006_pg_dumpall.pl
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 25989e8f16b..d8e9e101254 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -102,6 +102,7 @@ tests += {
't/003_pg_dump_with_server.pl',
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
+ 't/006_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/t/006_pg_dumpall.pl b/src/bin/pg_dump/t/006_pg_dumpall.pl
new file mode 100644
index 00000000000..fdfd1ae990b
--- /dev/null
+++ b/src/bin/pg_dump/t/006_pg_dumpall.pl
@@ -0,0 +1,331 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and also to save the dump and restore information needed for
+# the test assertions.
+#
+# The "setup_sql" entry is a valid psql script containing SQL commands to
+# execute before the tests run. All setups are executed before any test
+# runs.
+#
+# The "dump_cmd" and "restore_cmd" entries are the commands that will be
+# executed. The "restore_cmd" must include the --file flag to save the
+# restore output so that we can assert on it.
+#
+# The "like" and "unlike" entries are regexps used to match the pg_restore
+# output. At least one of them must be provided per test case, but both
+# may be used; see the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ \s*\QALTER ROLE dumpall WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'SCRAM-SHA-256\E
+ [^']+';\s*\n
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl1 OWNER tap LOCATION \E(?:E)?\Q'$tablespace1';\E
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ CREATE TABLE t2 (id int);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ CREATE TABLE t4 (id int);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ CREATE TABLE t6 (id int);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ CREATE TABLE t8 (id int);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ CREATE TABLE t10 (id int);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql =>
+ 'CREATE TABLE format_directory(a int, b boolean, c text);',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCREATE TABLE public.format_directory (/xm
+ },
+
+ format_tar => {
+ setup_sql => 'CREATE TABLE format_tar(id int);',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCREATE TABLE public.format_tar (/xm
+ },
+
+ format_custom => {
+ setup_sql => 'CREATE TABLE format_custom(a int, b boolean, c text);',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCREATE TABLE public.format_custom (/xm
+ },);
+
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to run pg_restore for each test case so
+ # that we don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if (!($pgdumpall_runs{$run}->{like}) && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+$node->stop('fast');
+
+done_testing();
--
2.34.1
On Fri, 4 Apr 2025 at 01:17, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote:
On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hi
FWIW I don't think the on_exit_nicely business is in final shape just
yet. We're doing something super strange and novel about keeping track
of an array index, so that we can modify it later. Or something like
that, I think? That doesn't sound all that nice to me. Elsewhere it
was suggested that we need some way to keep track of the list of things
that need cleanup (a list of connections IIRC?) -- perhaps in a
thread-local variable or a global or something -- and we install the
cleanup function once, and that reads from the variable. The program
can add things to the list, or remove them, at will; and we don't need
to modify the cleanup function in any way.
--
Álvaro Herrera        Breisgau, Deutschland — https://www.EnterpriseDB.com/
Thanks Álvaro for the feedback.
I removed the old handling of on_exit_nicely_list from the last patch
set and added one simple function that just updates the archive handle in
shutdown_info (shutdown_info.AHX = AHX;).
For the first database we add an entry to the on_exit_nicely_list array;
for the remaining databases we only update shutdown_info, since the
connection for the previous database has already been closed. With this
fix we no longer touch the on_exit_nicely_list entry for each database.
Here, I am attaching updated patches.
OK, looks good. Here's my latest. I'm currently working on tidying up
docco and comments.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Thanks Andrew for the updated patches.
Here, I am attaching a delta patch with some more TAP-test cases.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
delta_0004-some-negative-TAP-test-case-for-pg_restore-when-dump.noci (application/octet-stream)
From 6ca50003f084b6d01f1d9ee8bc9dd56369ca4054 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Fri, 4 Apr 2025 13:48:30 +0530
Subject: [PATCH] some negative TAP-test case for pg_restore when dump of
pg_dumpall is used.
---
src/bin/pg_dump/t/006_pg_dumpall.pl | 76 ++++++++++++++++++++++++++---
1 file changed, 68 insertions(+), 8 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/006_pg_dumpall.pl
diff --git a/src/bin/pg_dump/t/006_pg_dumpall.pl b/src/bin/pg_dump/t/006_pg_dumpall.pl
old mode 100644
new mode 100755
index fdfd1ae990b..44afdf525ff
--- a/src/bin/pg_dump/t/006_pg_dumpall.pl
+++ b/src/bin/pg_dump/t/006_pg_dumpall.pl
@@ -115,6 +115,7 @@ my %pgdumpall_runs = (
CREATE ROLE grant8;
CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
GRANT SELECT ON TABLE t TO grant1;
GRANT INSERT ON TABLE t TO grant2;
@@ -157,27 +158,37 @@ my %pgdumpall_runs = (
setup_sql => 'CREATE DATABASE db1;
\c db1
CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
CREATE DATABASE db2;
\c db2
CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
CREATE DATABASE dbex3;
\c dbex3
CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
CREATE DATABASE dbex4;
\c dbex4
CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
CREATE DATABASE db5;
\c db5
CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
',
dump_cmd => [
'pg_dumpall',
@@ -225,8 +236,8 @@ my %pgdumpall_runs = (
},
format_directory => {
- setup_sql =>
- 'CREATE TABLE format_directory(a int, b boolean, c text);',
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
@@ -238,11 +249,12 @@ my %pgdumpall_runs = (
'--file' => "$tempdir/format_directory.sql",
"$tempdir/format_directory",
],
- like => qr/^\n\QCREATE TABLE public.format_directory (/xm
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
},
format_tar => {
- setup_sql => 'CREATE TABLE format_tar(id int);',
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
dump_cmd => [
'pg_dumpall',
'--format' => 'tar',
@@ -254,11 +266,12 @@ my %pgdumpall_runs = (
'--file' => "$tempdir/format_tar.sql",
"$tempdir/format_tar",
],
- like => qr/^\n\QCREATE TABLE public.format_tar (/xm
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
},
format_custom => {
- setup_sql => 'CREATE TABLE format_custom(a int, b boolean, c text);',
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
dump_cmd => [
'pg_dumpall',
'--format' => 'custom',
@@ -270,9 +283,28 @@ my %pgdumpall_runs = (
'--file' => "$tempdir/format_custom.sql",
"$tempdir/format_custom",
],
- like => qr/^ \n\QCREATE TABLE public.format_custom (/xm
- },);
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ }, );
# First execute the setup_sql
foreach my $run (sort keys %pgdumpall_runs)
@@ -326,6 +358,34 @@ foreach my $run (sort keys %pgdumpall_runs)
}
}
+# Some negative test cases with a dump of pg_dumpall and restore using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql", ],
+ qr/\Qpg_restore: error: -C\/--create option should be specified when restoring multiple databases by archive of pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom', '--list',
+ '--file' => "$tempdir/error_test.sql", ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring multiple databases by archive of pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with -d option
+$node->command_fails_like(
+ [ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq', ],
+ qr/\Qpg_restore: error: could not connect to database "dbpq"\E/,
+ 'When a non-existent database is given with -d option in pg_restore with dump of pg_dumpall');
+
$node->stop('fast');
done_testing();
--
2.39.3
On Fri, 4 Apr 2025 at 13:52, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Fri, 4 Apr 2025 at 01:17, Andrew Dunstan <andrew@dunslane.net> wrote:
OK, looks good. Here's my latest. I'm currently working on tidying up
docco and comments.
Thanks Andrew for the updated patches.
Here, I am attaching a delta patch with some more TAP-test cases.
Here, I am attaching an updated delta patch which has some more TAP
tests. Please include these tests also. This patch can be applied on
v20250403_0004* patch.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
delta_20250403-add-some-more-TAP-test-for-pg_restore-and-pg_dumpall.noci (application/octet-stream)
From a44943d6925aaffa1cd1d0b2d96e65466198278c Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Fri, 4 Apr 2025 14:36:31 +0530
Subject: [PATCH] add some more TAP-test for pg_restore and pg_dumpall for
non-text mode of pg_dumpall
---
src/bin/pg_dump/t/001_basic.pl | 10 ++++
src/bin/pg_dump/t/006_pg_dumpall.pl | 76 ++++++++++++++++++++++++++---
2 files changed, 78 insertions(+), 8 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
mode change 100644 => 100755 src/bin/pg_dump/t/006_pg_dumpall.pl
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 0bbcdbe84a7..113a915bfbf
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -242,6 +242,16 @@ command_fails_like(
qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
'pg_restore: option --exclude-database cannot be used together with -g/--globals-only');
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring multiple databases by archive of pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump');
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring multiple databases by archive of pg_dumpall\E/,
+ 'When option --globals-only is used in pg_restore with dump of pg_dump');
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
diff --git a/src/bin/pg_dump/t/006_pg_dumpall.pl b/src/bin/pg_dump/t/006_pg_dumpall.pl
old mode 100644
new mode 100755
index fdfd1ae990b..44afdf525ff
--- a/src/bin/pg_dump/t/006_pg_dumpall.pl
+++ b/src/bin/pg_dump/t/006_pg_dumpall.pl
@@ -115,6 +115,7 @@ my %pgdumpall_runs = (
CREATE ROLE grant8;
CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
GRANT SELECT ON TABLE t TO grant1;
GRANT INSERT ON TABLE t TO grant2;
@@ -157,27 +158,37 @@ my %pgdumpall_runs = (
setup_sql => 'CREATE DATABASE db1;
\c db1
CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
CREATE DATABASE db2;
\c db2
CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
CREATE DATABASE dbex3;
\c dbex3
CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
CREATE DATABASE dbex4;
\c dbex4
CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
CREATE DATABASE db5;
\c db5
CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
',
dump_cmd => [
'pg_dumpall',
@@ -225,8 +236,8 @@ my %pgdumpall_runs = (
},
format_directory => {
- setup_sql =>
- 'CREATE TABLE format_directory(a int, b boolean, c text);',
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
dump_cmd => [
'pg_dumpall',
'--format' => 'directory',
@@ -238,11 +249,12 @@ my %pgdumpall_runs = (
'--file' => "$tempdir/format_directory.sql",
"$tempdir/format_directory",
],
- like => qr/^\n\QCREATE TABLE public.format_directory (/xm
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
},
format_tar => {
- setup_sql => 'CREATE TABLE format_tar(id int);',
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
dump_cmd => [
'pg_dumpall',
'--format' => 'tar',
@@ -254,11 +266,12 @@ my %pgdumpall_runs = (
'--file' => "$tempdir/format_tar.sql",
"$tempdir/format_tar",
],
- like => qr/^\n\QCREATE TABLE public.format_tar (/xm
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
},
format_custom => {
- setup_sql => 'CREATE TABLE format_custom(a int, b boolean, c text);',
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
dump_cmd => [
'pg_dumpall',
'--format' => 'custom',
@@ -270,9 +283,28 @@ my %pgdumpall_runs = (
'--file' => "$tempdir/format_custom.sql",
"$tempdir/format_custom",
],
- like => qr/^ \n\QCREATE TABLE public.format_custom (/xm
- },);
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ }, );
# First execute the setup_sql
foreach my $run (sort keys %pgdumpall_runs)
@@ -326,6 +358,34 @@ foreach my $run (sort keys %pgdumpall_runs)
}
}
+# Some negative test cases with a dump of pg_dumpall and restore using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql", ],
+ qr/\Qpg_restore: error: -C\/--create option should be specified when restoring multiple databases by archive of pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom', '--list',
+ '--file' => "$tempdir/error_test.sql", ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring multiple databases by archive of pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with -d option
+$node->command_fails_like(
+ [ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq', ],
+ qr/\Qpg_restore: error: could not connect to database "dbpq"\E/,
+ 'When a non-existent database is given with -d option in pg_restore with dump of pg_dumpall');
+
$node->stop('fast');
done_testing();
--
2.39.3
On 2025-04-04 Fr 5:12 AM, Mahendra Singh Thalor wrote:
On Fri, 4 Apr 2025 at 13:52, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Here, I am attaching an updated delta patch which has some more TAP
tests. Please include these tests also. This patch can be applied on
v20250403_0004* patch.
Thanks. I have pushed these now with a few further small tweaks.
cheers
andrew
--
Andrew Dunstan
EDB:https://www.enterprisedb.com
On Sat, 5 Apr 2025 at 01:41, Andrew Dunstan <andrew@dunslane.net> wrote:
Thanks. I have pushed these now with a few further small tweaks.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Thanks Andrew for committing this.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Hi.
On Fri, 4 Apr 2025 at 17:11, Andrew Dunstan <andrew@dunslane.net> wrote:
Thanks. I have pushed these now with a few further small tweaks.
Sorry if it is not the right place.
Coverity has another resource leak alert.
trivial patch attached.
best regards,
Ranier Vilela
Attachments:
fix_resource_leak_pg_restore.patch (application/octet-stream)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 06c28ab314..eb3109d719 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -905,6 +905,7 @@ read_one_statement(StringInfo inBuf, FILE *pfile)
if (c == '\n')
appendStringInfoChar(inBuf, (char) '\n');
}
+ destroyStringInfo(&q);
/* No input before EOF signal means time to quit. */
if (c == EOF && inBuf->len == 0)
On 2025-04-10 Th 2:38 PM, Ranier Vilela wrote:
Thanks. I have pushed these now with a few further small tweaks.
Sorry if it is not the right place.
Coverity has another resource leak alert. Trivial patch attached.
Thanks for checking. Pushed.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, 10 Apr 2025 at 15:58, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-04-10 Th 2:38 PM, Ranier Vilela wrote:
Thanks. I have pushed these now with a few further small tweaks.
Sorry if it is not the right place.
Coverity has another resource leak alert. Trivial patch attached.
Thanks for checking. Pushed.
Andrew, I think that the commit wasn't quite correct.
Now the variable *q* is being destroyed inside the loop.
The patch destroyed the variable *q* (StringInfo) after the while loop.
best regards,
Ranier Vilela
Show quoted text
On 2025-04-10 Th 5:45 PM, Ranier Vilela wrote:
On Thu, 10 Apr 2025 at 15:58, Andrew Dunstan <andrew@dunslane.net> wrote:
Thanks for checking. Pushed.
Andrew, I think that the commit wasn't quite correct.
Now the variable *q* is being destroyed inside the loop. The patch
destroyed the variable *q* (StringInfo) after the while loop.
Yes, you're right. Must be blind. Fixed.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, 10 Apr 2025 at 20:09, Andrew Dunstan <andrew@dunslane.net> wrote:
Yes, you're right. Must be blind. Fixed.
Thanks Andrew.
best regards,
Ranier Vilela
Hi Andrew.
I just saw the fix commit.
My fault.
I'm sorry.
best regards,
Ranier Vilela
On Sat, 5 Apr 2025 at 01:41, Andrew Dunstan <andrew@dunslane.net> wrote:
Thanks. I have pushed these now with a few further small tweaks.
Hi Andrew,
I did some refactoring to determine the dump file extension (.dmp/.tar
etc.) in pg_restore. With the attached patch, we no longer try to
determine the file extension for each database; instead we determine it
once, before the loop.
Here, I am attaching a patch for the same. Please have a look over this.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v01-pg_restore-refactor-code-of-dump-file-extenion.patch (application/octet-stream)
From b4721f4c91017297cfbdeaa458291f7a97b023d7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 15 Apr 2025 23:45:44 +0530
Subject: [PATCH] pg_restore: refactor code of dump file extension
After this refactor, we find the file extension once; earlier we were
trying to get the extension for each database in the loop.
---
src/bin/pg_dump/pg_restore.c | 82 +++++++++++++++++++++++++++---------
1 file changed, 61 insertions(+), 21 deletions(-)
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index ff4bb320fc9..e6486957620 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -71,6 +71,8 @@ static int get_dbnames_list_to_restore(PGconn *conn,
SimpleStringList db_exclude_patterns);
static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
SimpleOidStringList *dbname_oid_list);
+static
+char *get_dump_file_exten(const char *dumpdirpath, Oid dboid, const ArchiveFormat format);
int
main(int argc, char **argv)
@@ -1109,6 +1111,7 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
bool dumpData = opts->dumpData;
bool dumpSchema = opts->dumpSchema;
bool dumpStatistics = opts->dumpSchema;
+ const char *file_exten;
/* Save db name to reuse it for all the database. */
if (opts->cparams.dbname)
@@ -1163,6 +1166,9 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+ /* Now get the dump file extension. */
+ file_exten = get_dump_file_exten(dumpdirpath, dbname_oid_list.head->oid, opts->format);
+
/*
* Till now, we made a list of databases, those needs to be restored after
* skipping names of exclude-database. Now we can launch parallel workers
@@ -1172,8 +1178,6 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
db_cell; db_cell = db_cell->next)
{
char subdirpath[MAXPGPATH];
- char subdirdbpath[MAXPGPATH];
- char dbfilename[MAXPGPATH];
int n_errors;
/* ignore dbs marked for skipping */
@@ -1190,25 +1194,8 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
opts->cparams.override_dbname = NULL;
}
- snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
-
- /*
- * Look for the database dump file/dir. If there is an {oid}.tar or
- * {oid}.dmp file, use it. Otherwise try to use a directory called
- * {oid}
- */
- snprintf(dbfilename, MAXPGPATH, "%u.tar", db_cell->oid);
- if (file_exists_in_directory(subdirdbpath, dbfilename))
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, db_cell->oid);
- else
- {
- snprintf(dbfilename, MAXPGPATH, "%u.dmp", db_cell->oid);
-
- if (file_exists_in_directory(subdirdbpath, dbfilename))
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, db_cell->oid);
- else
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, db_cell->oid);
- }
+ /* Set particular dump file path. */
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u%s", dumpdirpath, db_cell->oid, file_exten);
pg_log_info("restoring database \"%s\"", db_cell->str);
@@ -1384,3 +1371,56 @@ copy_or_print_global_file(const char *outfile, FILE *pfile)
if (strcmp(outfile, "-") != 0)
fclose(OPF);
}
+
+/*
+ * get_dump_file_exten
+ *
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+static
+char *get_dump_file_exten(const char *dumpdirpath, Oid dboid, const ArchiveFormat format)
+{
+ char *file_exten = "";
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
+
+ switch(format)
+ {
+ case archCustom:
+ file_exten = ".dmp";
+ break;
+ case archTar:
+ file_exten = ".tar";
+ break;
+ case archDirectory:
+ file_exten = "";
+ break;
+ case archUnknown: /* based on which file exists, determine the extension. */
+ snprintf(dbfilename, MAXPGPATH, "%u", dboid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ file_exten = "";
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dboid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ file_exten = ".dmp";
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dboid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ file_exten = ".tar";
+ else
+ file_exten = "";
+ }
+ }
+ break;
+ default:
+ pg_fatal("unrecognized file format \"%d\"", format);
+ }
+
+ return file_exten;
+}
--
2.39.3
On 2025-04-15 Tu 2:30 PM, Mahendra Singh Thalor wrote:
Hi Andrew,
I did some refactoring to find out dump file extensions(.dmp/.tar etc)
in pg_restore. With the attached patch, we will not try to find out
file extension with each database, rather we will find out before the
loop. Here, I am attaching a patch for the same. Please have a look over this.
That doesn't look right at first glance. You shouldn't have to tell
pg_restore what format to use; it should be able to intuit it from the
dumps (and that's what the docs say it does).
The saving here would be hardly measurable anyway; you would in effect be
saving one or two stat calls per database.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
Thanks. I have pushed these now with a few further small tweaks.
This drops all databases:
pg_dumpall --clean -Fd -f /tmp/dump
pg_restore -d template1 --globals-only /tmp/dump
That didn't match my expectations given this help text:
$ pg_restore --help|grep global
-g, --globals-only restore only global objects, no databases
This happens in dropDBs(). I found that by searching pg_dumpall.c for "OPF",
which finds all the content we can write to global.dat.
commit 1495eff wrote:
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
 			continue;
 		}
+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
 		if (filename)
 			fclose(OPF);
-		ret = runPgDump(dbname, create_opts);
+		ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
 		if (ret != 0)
 			pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
 		if (filename)
 		{
-			OPF = fopen(filename, PG_BINARY_A);
+			char		global_path[MAXPGPATH];
+
+			if (archDumpFormat != archNull)
+				snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+			else
+				snprintf(global_path, MAXPGPATH, "%s", filename);
+
+			OPF = fopen(global_path, PG_BINARY_A);
 			if (!OPF)
 				pg_fatal("could not re-open the output file \"%s\": %m",
-						 filename);
+						 global_path);
Minor item: plain mode benefits from reopening, because pg_dump appended to
the plain output file. There's no analogous need to reopen global.dat, since
just this one process writes to global.dat.
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
 	initPQExpBuffer(&connstrbuf);
 	initPQExpBuffer(&cmd);
-	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
-					  pgdumpopts->data, create_opts);
-
 	/*
-	 * If we have a filename, use the undocumented plain-append pg_dump
-	 * format.
+	 * If this is not a plain format dump, then append file name and dump
+	 * format to the pg_dump command to get archive dump.
 	 */
-	if (filename)
-		appendPQExpBufferStr(&cmd, " -Fa ");
+	if (archDumpFormat != archNull)
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+						  dbfile, create_opts);
+
+		if (archDumpFormat == archDirectory)
+			appendPQExpBufferStr(&cmd, " --format=directory ");
+		else if (archDumpFormat == archCustom)
+			appendPQExpBufferStr(&cmd, " --format=custom ");
+		else if (archDumpFormat == archTar)
+			appendPQExpBufferStr(&cmd, " --format=tar ");
+	}
 	else
-		appendPQExpBufferStr(&cmd, " -Fp ");
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+						  pgdumpopts->data, create_opts);
This uses pgdumpopts for plain mode only, so many pg_dumpall options silently
have no effect in non-plain mode. Example:
strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec
strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
What makes it okay to use this particular subset of SQL lexing?
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern
+ * match an entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+							SimpleOidStringList *dbname_oid_list,
+							SimpleStringList db_exclude_patterns)
+{
+	int			count_db = 0;
+	PQExpBuffer query;
+	PGresult   *res;
+
+	query = createPQExpBuffer();
+
+	if (!conn)
+		pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");
When do we not have a connection here? We'd need to document this behavior
variation if it stays, but I'd prefer if we can just rely on having a
connection.
+		/* If database is already created, then don't set createDB flag. */
+		if (opts->cparams.dbname)
+		{
+			PGconn	   *test_conn;
+
+			test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+										opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+										false, progname, NULL, NULL, NULL, NULL);
+			if (test_conn)
+			{
+				PQfinish(test_conn);
+
+				/* Use already created database for connection. */
+				opts->createDB = 0;
+				opts->cparams.dbname = db_cell->str;
+			}
+			else
+			{
+				/* we'll have to create it */
+				opts->createDB = 1;
+				opts->cparams.dbname = connected_db;
+			}
In released versions, "pg_restore --create" fails if the database exists, and
pg_restore w/o --create fails unless the database exists. I think we should
continue that pattern in this new feature. If not, pg_restore should document
how it treats pg_dumpall-sourced dumps with the "create if not exists"
semantics appearing here.
Thanks Noah for the comments.
On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote:
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
Thanks. I have pushed these now with a few further small tweaks.
This drops all databases:
pg_dumpall --clean -Fd -f /tmp/dump
pg_restore -d template1 --globals-only /tmp/dump

That didn't match my expectations given this help text:
$ pg_restore --help|grep global
-g, --globals-only restore only global objects, no databases
Databases are global objects so due to --clean command, we are putting
drop commands in global.dat for all the databases. While restoring, we
used the "--globals-only" option so we are dropping all these
databases by global.dat file.
Please let us know your expectations for this specific case.
This happens in dropDBs(). I found that by searching pg_dumpall.c for "OPF",
which finds all the content we can write to global.dat.

commit 1495eff wrote:
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
 			continue;
 		}
+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.
Yes, we can use appendShellString also. We are using snprintf in the
pg_dump.c file also.
Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u",
loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]);
If we want to use appendShellString, I can write a patch for these.
Please let me know your opinion.
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
 		if (filename)
 			fclose(OPF);
-		ret = runPgDump(dbname, create_opts);
+		ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
 		if (ret != 0)
 			pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
 		if (filename)
 		{
-			OPF = fopen(filename, PG_BINARY_A);
+			char		global_path[MAXPGPATH];
+
+			if (archDumpFormat != archNull)
+				snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+			else
+				snprintf(global_path, MAXPGPATH, "%s", filename);
+
+			OPF = fopen(global_path, PG_BINARY_A);
 			if (!OPF)
 				pg_fatal("could not re-open the output file \"%s\": %m",
-						 filename);
+						 global_path);

Minor item: plain mode benefits from reopening, because pg_dump appended to
the plain output file. There's no analogous need to reopen global.dat, since
just this one process writes to global.dat.
Yes, we need to open the global.dat file only once, but to keep the code
simple we kept the old code.
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
 	initPQExpBuffer(&connstrbuf);
 	initPQExpBuffer(&cmd);
-	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
-					  pgdumpopts->data, create_opts);
-
 	/*
-	 * If we have a filename, use the undocumented plain-append pg_dump
-	 * format.
+	 * If this is not a plain format dump, then append file name and dump
+	 * format to the pg_dump command to get archive dump.
 	 */
-	if (filename)
-		appendPQExpBufferStr(&cmd, " -Fa ");
+	if (archDumpFormat != archNull)
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+						  dbfile, create_opts);
+
+		if (archDumpFormat == archDirectory)
+			appendPQExpBufferStr(&cmd, " --format=directory ");
+		else if (archDumpFormat == archCustom)
+			appendPQExpBufferStr(&cmd, " --format=custom ");
+		else if (archDumpFormat == archTar)
+			appendPQExpBufferStr(&cmd, " --format=tar ");
+	}
 	else
-		appendPQExpBufferStr(&cmd, " -Fp ");
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+						  pgdumpopts->data, create_opts);

This uses pgdumpopts for plain mode only, so many pg_dumpall options silently
have no effect in non-plain mode. Example:

strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec
strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec
Agreed. We can add pgdumpopts->data to all the dump formats.
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.

What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern
+ * match an entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+							SimpleOidStringList *dbname_oid_list,
+							SimpleStringList db_exclude_patterns)
+{
+	int			count_db = 0;
+	PQExpBuffer query;
+	PGresult   *res;
+
+	query = createPQExpBuffer();
+
+	if (!conn)
+		pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

When do we not have a connection here? We'd need to document this behavior
variation if it stays, but I'd prefer if we can just rely on having a
connection.
Yes, we can document this behavior.
+		/* If database is already created, then don't set createDB flag. */
+		if (opts->cparams.dbname)
+		{
+			PGconn	   *test_conn;
+
+			test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+										opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+										false, progname, NULL, NULL, NULL, NULL);
+			if (test_conn)
+			{
+				PQfinish(test_conn);
+
+				/* Use already created database for connection. */
+				opts->createDB = 0;
+				opts->cparams.dbname = db_cell->str;
+			}
+			else
+			{
+				/* we'll have to create it */
+				opts->createDB = 1;
+				opts->cparams.dbname = connected_db;
+			}

In released versions, "pg_restore --create" fails if the database exists, and
pg_restore w/o --create fails unless the database exists. I think we should
continue that pattern in this new feature. If not, pg_restore should document
how it treats pg_dumpall-sourced dumps with the "create if not exists"
semantics appearing here.
Yes, we can document this behavior also.
I am working on all these review comments and I will post a patch in
the coming days.
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote:
On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote:
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
Thanks. I have pushed these now with a few further small tweaks.
This drops all databases:
pg_dumpall --clean -Fd -f /tmp/dump
pg_restore -d template1 --globals-only /tmp/dump

That didn't match my expectations given this help text:
$ pg_restore --help|grep global
-g, --globals-only restore only global objects, no databases

Databases are global objects so due to --clean command, we are putting
drop commands in global.dat for all the databases. While restoring, we
used the "--globals-only" option so we are dropping all these
databases by global.dat file.

Please let us know your expectations for this specific case.
Be consistent with "pg_dump". A quick check suggests "pg_dump --clean"
affects plain format only. For non-plain formats, only the pg_restore
argument governs the final commands:
$ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore -f- /tmp/dump | grep DROP
$ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore --clean -f- /tmp/dump | grep DROP
DROP TABLE public.example;
That said, you should audit code referencing the --clean flag and see if
there's more to it than that quick test suggests. Note that aligning with
pg_dump will require changes for object types beyond databases. "pg_restore
--clean" of a global dump should emit DROP TABLESPACE and DROP ROLE as
appropriate, regardless of whether the original pg_dumpall had --clean.
For my earlier example (pg_dumpall --clean; pg_restore --globals-only) I
expect the same outcome as plain-format "pg_dumpall --globals-only", which is
no databases dropped or created. The help line says "no databases". Plain
"pg_dumpall --globals-only" and even "pg_dumpall --globals-only --clean" do
not drop or create databases.
commit 1495eff wrote:
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
 			continue;
 		}
+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.

Yes, we can use appendShellString also. We are using snprintf in the
pg_dump.c file also.
Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u",
loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]);
It's true snprintf() is not banned in these programs, but don't use it to do
the quoting for OS shell command lines or fragments thereof. dbfilepath is a
fragment of an OS shell command line. The LARGE OBJECTS string is not one of
those. Hence, the LARGE OBJECTS scenario should keep using snprintf().
If we want to use appendShellString, I can write a patch for these.
Please let me know your opinion.
Use appendShellString() for shell quoting. Don't attempt to use it for other
purposes.
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.

What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
I'm hearing that you copied this code from somewhere. Running
"git grep 'time to shut down'" suggests you copied it from
InteractiveBackend(). Is that right? I do see other similarities between
read_one_statement() and InteractiveBackend().
Copying InteractiveBackend() provides negligible assurance that this is the
right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
Single-user mode survives mostly as a last resort for recovering from having
reached xidStopLimit, is rarely used, and only superusers write queries to it.
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern
+ * match an entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+							SimpleOidStringList *dbname_oid_list,
+							SimpleStringList db_exclude_patterns)
+{
+	int			count_db = 0;
+	PQExpBuffer query;
+	PGresult   *res;
+
+	query = createPQExpBuffer();
+
+	if (!conn)
+		pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

When do we not have a connection here? We'd need to document this behavior
variation if it stays, but I'd prefer if we can just rely on having a
connection.

Yes, we can document this behavior.
My review asked a question there. I don't see an answer to that question.
Would you answer that question?
Thanks Noah for the feedback.
On Wed, 16 Jul 2025 at 05:50, Noah Misch <noah@leadboat.com> wrote:
On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote:
On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote:
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
Thanks. I have pushed these now with a few further small tweaks.
This drops all databases:
pg_dumpall --clean -Fd -f /tmp/dump
pg_restore -d template1 --globals-only /tmp/dump

That didn't match my expectations given this help text:
$ pg_restore --help|grep global
-g, --globals-only restore only global objects, no databases

Databases are global objects so due to --clean command, we are putting
drop commands in global.dat for all the databases. While restoring, we
used the "--globals-only" option so we are dropping all these
databases by global.dat file.

Please let us know your expectations for this specific case.
Be consistent with "pg_dump". A quick check suggests "pg_dump --clean"
affects plain format only. For non-plain formats, only the pg_restore
argument governs the final commands:

$ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore -f- /tmp/dump | grep DROP
$ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore --clean -f- /tmp/dump | grep DROP
DROP TABLE public.example;

That said, you should audit code referencing the --clean flag and see if
there's more to it than that quick test suggests. Note that aligning with
pg_dump will require changes for object types beyond databases. "pg_restore
--clean" of a global dump should emit DROP TABLESPACE and DROP ROLE as
appropriate, regardless of whether the original pg_dumpall had --clean.

For my earlier example (pg_dumpall --clean; pg_restore --globals-only) I
expect the same outcome as plain-format "pg_dumpall --globals-only", which is
no databases dropped or created. The help line says "no databases". Plain
"pg_dumpall --globals-only" and even "pg_dumpall --globals-only --clean" do
not drop or create databases.
To pg_restore, we are giving a dump from pg_dumpall which has a
global.dat file, and we have drop commands in that global.dat file, so
when we use 'globals-only' we drop databases because of those DROP
commands.
As of now, we don't have any filter for the global.dat file in restore. If
a user wants to restore only globals (without dropping databases), then they
should use 'globals-only' in pg_dumpall.
Or, if we don't want to DROP databases via the global.dat file, then we
should add a filter in pg_restore (hard to implement, as we have SQL
commands in global.dat). I think, for this case, we can make some more
doc changes.
Example: pg_restore --globals-only: this will restore the global.dat
file (including all drop commands), so it might drop databases if any
drop commands are present.
@Andrew Dunstan Please add your opinion.
commit 1495eff wrote:
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
 			continue;
 		}
+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.

Yes, we can use appendShellString also. We are using snprintf in the
pg_dump.c file also.
Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u",
loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]);

It's true snprintf() is not banned in these programs, but don't use it to do
the quoting for OS shell command lines or fragments thereof. dbfilepath is a
fragment of an OS shell command line. The LARGE OBJECTS string is not one of
those. Hence, the LARGE OBJECTS scenario should keep using snprintf().

If we want to use appendShellString, I can write a patch for these.
Please let me know your opinion.

Use appendShellString() for shell quoting. Don't attempt to use it for other
purposes.
Okay. Fixed in attached patch.
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.

What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
I'm hearing that you copied this code from somewhere. Running
"git grep 'time to shut down'" suggests you copied it from
InteractiveBackend(). Is that right? I do see other similarities between
read_one_statement() and InteractiveBackend().

Copying InteractiveBackend() provides negligible assurance that this is the
right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
Single-user mode survives mostly as a last resort for recovering from having
reached xidStopLimit, is rarely used, and only superusers write queries to it.
Yes, we copied this from InteractiveBackend to read statements from
global.dat file.
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern
+ * match an entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+							SimpleOidStringList *dbname_oid_list,
+							SimpleStringList db_exclude_patterns)
+{
+	int			count_db = 0;
+	PQExpBuffer query;
+	PGresult   *res;
+
+	query = createPQExpBuffer();
+
+	if (!conn)
+		pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

When do we not have a connection here? We'd need to document this behavior
variation if it stays, but I'd prefer if we can just rely on having a
connection.

Yes, we can document this behavior.
My review asked a question there. I don't see an answer to that question.
Would you answer that question?
Example: if there is no active database, not even postgres/template1, then
we will consider PATTERN as NAME. This is a rare case.
In the attached patch, I added one doc line for this case.
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
 			continue;
 		}
+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.
Fixed.
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
 		if (filename)
 			fclose(OPF);
-		ret = runPgDump(dbname, create_opts);
+		ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
 		if (ret != 0)
 			pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
 		if (filename)
 		{
-			OPF = fopen(filename, PG_BINARY_A);
+			char		global_path[MAXPGPATH];
+
+			if (archDumpFormat != archNull)
+				snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+			else
+				snprintf(global_path, MAXPGPATH, "%s", filename);
+
+			OPF = fopen(global_path, PG_BINARY_A);
 			if (!OPF)
 				pg_fatal("could not re-open the output file \"%s\": %m",
-						 filename);
+						 global_path);

Minor item: plain mode benefits from reopening, because pg_dump appended to
the plain output file. There's no analogous need to reopen global.dat, since
just this one process writes to global.dat.
Fixed.
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
 	initPQExpBuffer(&connstrbuf);
 	initPQExpBuffer(&cmd);
-	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
-					  pgdumpopts->data, create_opts);
-
 	/*
-	 * If we have a filename, use the undocumented plain-append pg_dump
-	 * format.
+	 * If this is not a plain format dump, then append file name and dump
+	 * format to the pg_dump command to get archive dump.
 	 */
-	if (filename)
-		appendPQExpBufferStr(&cmd, " -Fa ");
+	if (archDumpFormat != archNull)
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+						  dbfile, create_opts);
+
+		if (archDumpFormat == archDirectory)
+			appendPQExpBufferStr(&cmd, " --format=directory ");
+		else if (archDumpFormat == archCustom)
+			appendPQExpBufferStr(&cmd, " --format=custom ");
+		else if (archDumpFormat == archTar)
+			appendPQExpBufferStr(&cmd, " --format=tar ");
+	}
 	else
-		appendPQExpBufferStr(&cmd, " -Fp ");
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+						  pgdumpopts->data, create_opts);

This uses pgdumpopts for plain mode only, so many pg_dumpall options silently
have no effect in non-plain mode. Example:strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec
strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec
Fixed.
+	/* If database is already created, then don't set createDB flag. */
+	if (opts->cparams.dbname)
+	{
+		PGconn	   *test_conn;
+
+		test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+									opts->cparams.pgport, opts->cparams.username,
+									TRI_DEFAULT, false, progname,
+									NULL, NULL, NULL, NULL);
+		if (test_conn)
+		{
+			PQfinish(test_conn);
+
+			/* Use already created database for connection. */
+			opts->createDB = 0;
+			opts->cparams.dbname = db_cell->str;
+		}
+		else
+		{
+			/* we'll have to create it */
+			opts->createDB = 1;
+			opts->cparams.dbname = connected_db;
+		}

In released versions, "pg_restore --create" fails if the database exists, and
pg_restore w/o --create fails unless the database exists. I think we should
continue that pattern in this new feature. If not, pg_restore should document
how it treats pg_dumpall-sourced dumps with the "create if not exists"
semantics appearing here.

Added one more doc line for this case.
Here, I am attaching a patch. Please let me know feedback.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v01_db2978-WIP-implement-setNodeValue-function.patch (application/octet-stream)
From ff9b28250dc0c016b37477536ece0b14b6d07c03 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Wed, 9 Jul 2025 11:42:47 +0530
Subject: [PATCH] implement setNodeValue function
DB-2978
---
contrib/dbms_xmldom/dbms_xmldom.c | 58 ++++++++++++++++++-
contrib/dbms_xmldom/dbms_xmldom.sql.in | 13 +++++
contrib/dbms_xmldom/dbms_xmldom_public.sql.in | 1 +
3 files changed, 71 insertions(+), 1 deletion(-)
diff --git a/contrib/dbms_xmldom/dbms_xmldom.c b/contrib/dbms_xmldom/dbms_xmldom.c
index ed49245b0cb..a297ddca6d3 100644
--- a/contrib/dbms_xmldom/dbms_xmldom.c
+++ b/contrib/dbms_xmldom/dbms_xmldom.c
@@ -42,6 +42,7 @@ PG_FUNCTION_INFO_V1(dbms_xmldom_free_document);
PG_FUNCTION_INFO_V1(dbms_xmldom_set_version);
PG_FUNCTION_INFO_V1(dbms_xmldom_get_nodename);
PG_FUNCTION_INFO_V1(dbms_xmldom_get_nodevalue);
+PG_FUNCTION_INFO_V1(dbms_xmldom_set_nodevalue);
PG_FUNCTION_INFO_V1(dbms_xmldom_get_firstchild);
PG_FUNCTION_INFO_V1(dbms_xmldom_get_childnodes);
PG_FUNCTION_INFO_V1(dbms_xmldom_get_nodelistlength);
@@ -49,7 +50,6 @@ PG_FUNCTION_INFO_V1(dbms_xmldom_get_nodelistitem);
PG_FUNCTION_INFO_V1(dbms_xmldom_make_element);
PG_FUNCTION_INFO_V1(dbms_xmldom_replace_child);
PG_FUNCTION_INFO_V1(dbms_xmldom_remove_child);
-PG_FUNCTION_INFO_V1(dbms_xmldom_set_node_value);
/* Function Declarations */
@@ -1287,6 +1287,62 @@ dbms_xmldom_get_nodevalue(PG_FUNCTION_ARGS)
PG_RETURN_VARCHAR_P(cstring_to_text((char *) nodevalue));
}
+/*
+ * dbms_xmldom_set_nodevalue
+ *
+ * Implements the functionality of dbms_xmldom.setNodeValue.
+ * sets the value to DOMNode
+ */
+Datum
+dbms_xmldom_set_nodevalue(PG_FUNCTION_ARGS)
+{
+ char docHashKey[HASHKEYLEN];
+ uint32 docid = 0;
+ uint64 nodeid = 0;
+ DocInfoPtr docInfo = NULL;
+ DocNodeInfoPtr nodeInfo = NULL;
+ char *nodevalue = NULL;
+ char *nodevaluenull = NULL;
+
+ /* If node is NULL, then return. */
+ if (PG_ARGISNULL(0))
+ PG_RETURN_NULL();
+
+ if (PG_ARGISNULL(1) || strlen(text_to_cstring(PG_GETARG_TEXT_P(1))) == 0)
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("invalid value for parameter \"%s\"", "tagName")));
+
+ nodevalue = text_to_cstring(PG_GETARG_TEXT_P(1));
+
+ memcpy(docHashKey, VARDATA_ANY(PG_GETARG_TEXT_P(0)), HASHKEYLEN);
+ extract_ids(docHashKey, &docid, &nodeid);
+
+ docInfo = getDocInfo(docid);
+ if (!docInfo)
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ err_redwood_sqlcode(-31181),
+ errmsg("invalid value for parameter \"%s\"", "n")));
+
+ nodeInfo =
+ (DocNodeInfoPtr) hash_search(docInfo->nodeInfo,
+ &nodeid,
+ HASH_FIND,
+ NULL);
+
+ Assert(nodeInfo && nodeInfo->node);
+
+ /* Now set new value. */
+// xmlNodeSetContent((xmlNodePtr) nodeInfo->node, (const xmlChar *) nodevaluenull);
+// xmlNodeAddContent((xmlNodePtr) nodeInfo->node, (const xmlChar *) nodevalue);
+
+ xmlNodeSetName((xmlNodePtr) nodeInfo->node, (const xmlChar *) nodevalue);
+// xmlNodeSetContent((xmlNodePtr) nodeInfo->node, (const xmlChar *) nodevalue);
+
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(docHashKey, HASHKEYLEN));
+}
+
/*
* dbms_xmldom_get_firstchild
*
diff --git a/contrib/dbms_xmldom/dbms_xmldom.sql.in b/contrib/dbms_xmldom/dbms_xmldom.sql.in
index 2236612d754..2eae94ec656 100644
--- a/contrib/dbms_xmldom/dbms_xmldom.sql.in
+++ b/contrib/dbms_xmldom/dbms_xmldom.sql.in
@@ -65,6 +65,10 @@ CREATE FUNCTION dbms_xmldom_getNodeValue(nodeid RAW) RETURNS VARCHAR2
AS '$libdir/dbms_xmldom', 'dbms_xmldom_get_nodevalue'
LANGUAGE C IMMUTABLE PARALLEL SAFE;
+CREATE FUNCTION dbms_xmldom_setNodeValue(nodeid RAW, name IN VARCHAR2) RETURNS RAW
+AS '$libdir/dbms_xmldom', 'dbms_xmldom_set_nodevalue'
+LANGUAGE C IMMUTABLE PARALLEL SAFE;
+
CREATE FUNCTION dbms_xmldom_getFirstChild(nodeid RAW) RETURNS RAW
AS '$libdir/dbms_xmldom', 'dbms_xmldom_get_firstchild'
LANGUAGE C IMMUTABLE PARALLEL SAFE;
@@ -145,6 +149,14 @@ CREATE OR REPLACE PACKAGE BODY dbms_xmldom IS
return dbms_xmldom_getNodeValue(n.id);
END;
+ FUNCTION setNodeValue(n domnode, name IN VARCHAR2) RETURN DOMNode SET search_path = pg_catalog, pg_temp IS
+ DECLARE
+ node DOMNode;
+ BEGIN
+ n.id = dbms_xmldom_setNodeValue(n.id, name);
+ return node;
+ END;
+
FUNCTION getFirstChild(n DOMNode) RETURN DOMNode SET search_path = pg_catalog, pg_temp IS
DECLARE
node DOMNode;
@@ -199,6 +211,7 @@ CREATE OR REPLACE PACKAGE BODY dbms_xmldom IS
return node;
END;
+
FUNCTION createElement(doc DOMDocument, tagName IN VARCHAR2) RETURN DOMElement SET search_path = pg_catalog, pg_temp IS
DECLARE
node DOMElement;
diff --git a/contrib/dbms_xmldom/dbms_xmldom_public.sql.in b/contrib/dbms_xmldom/dbms_xmldom_public.sql.in
index b7b1686e1fe..8fd2463394d 100644
--- a/contrib/dbms_xmldom/dbms_xmldom_public.sql.in
+++ b/contrib/dbms_xmldom/dbms_xmldom_public.sql.in
@@ -32,6 +32,7 @@ CREATE OR REPLACE PACKAGE dbms_xmldom AUTHID CURRENT_USER AS
FUNCTION getNodeName(n DOMNode) RETURN VARCHAR2;
FUNCTION getNodeValue(n domnode) RETURN VARCHAR2;
+ FUNCTION setNodeValue(n domnode, name IN VARCHAR2) RETURN DOMNode;
FUNCTION getFirstChild(n DOMNode) RETURN DOMNode;
FUNCTION getChildNodes(n DOMNode) RETURN DOMNodeList;
FUNCTION appendChild(n DOMNode, newChild IN DOMNode) RETURN DOMNode;
--
2.39.3
Attaching the correct patch.
Sorry, I attached the wrong patch in my last email.
On Thu, 17 Jul 2025 at 15:46, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Thanks Noah for the feedback.
On Wed, 16 Jul 2025 at 05:50, Noah Misch <noah@leadboat.com> wrote:
On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote:
On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote:
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
Thanks. I have pushed these now with a few further small tweaks.
This drops all databases:
pg_dumpall --clean -Fd -f /tmp/dump
pg_restore -d template1 --globals-only /tmp/dump

That didn't match my expectations given this help text:

$ pg_restore --help|grep global
  -g, --globals-only           restore only global objects, no databases

Databases are global objects, so due to the --clean command we are putting
drop commands in global.dat for all the databases. While restoring, we
used the "--globals-only" option, so we are dropping all these
databases via the global.dat file.

Please let us know your expectations for this specific case.
Be consistent with "pg_dump". A quick check suggests "pg_dump --clean"
affects plain format only. For non-plain formats, only the pg_restore
argument governs the final commands:

$ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore -f- /tmp/dump | grep DROP
$ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore --clean -f- /tmp/dump | grep DROP
DROP TABLE public.example;

That said, you should audit code referencing the --clean flag and see if
there's more to it than that quick test suggests. Note that aligning with
pg_dump will require changes for object types beyond databases. "pg_restore
--clean" of a global dump should emit DROP TABLESPACE and DROP ROLE as
appropriate, regardless of whether the original pg_dumpall had --clean.

For my earlier example (pg_dumpall --clean; pg_restore --globals-only) I
expect the same outcome as plain-format "pg_dumpall --globals-only", which is
no databases dropped or created. The help line says "no databases". Plain
"pg_dumpall --globals-only" and even "pg_dumpall --globals-only --clean" do
not drop or create databases.

To pg_restore, we are giving a dump of pg_dumpall which has a
global.dat file and we have drop commands in the global.dat file so
when we are using 'globals-only', we are dropping databases as we have
DROP commands.
As of now, we don't have any filter for global.dat file in restore. If
a user wants to restore only globals (without dropping databases), then they
should use 'globals-only' in pg_dumpall.
Or if we don't want to DROP databases by global.dat file, then we
should add a filter in pg_restore (hard to implement as we have SQL
commands in global.dat file). I think, for this case, we can do some
more doc changes.
Example: pg_restore --globals-only : this will restore the global.dat
file (including all drop commands). It might drop databases if it
contains any DROP commands.
@Andrew Dunstan Please add your opinion.

commit 1495eff wrote:

--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
continue;
 		}

+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.

Yes, we can use appendShellString also. We are using snprintf in the
pg_dump.c file also.
Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u",
	 loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]);

It's true snprintf() is not banned in these programs, but don't use it to do
the quoting for OS shell command lines or fragments thereof. dbfilepath is a
fragment of an OS shell command line. The LARGE OBJECTS string is not one of
those. Hence, the LARGE OBJECTS scenario should keep using snprintf().

If we want to use appendShellString, I can write a patch for these.
Please let me know your opinion.

Use appendShellString() for shell quoting. Don't attempt to use it for other
purposes.

Okay. Fixed in attached patch.
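For readers following along, the quoting rule appendShellString() applies on POSIX systems can be sketched as a standalone helper. This is an illustrative sketch only, not the fe_utils implementation (the real function appends to a PQExpBuffer and also rejects LF/CR in the argument); the idea is simply: wrap in single quotes, and turn each embedded single quote into '\''.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of POSIX-shell quoting in the spirit of appendShellString():
 * wrap the argument in single quotes and escape each embedded single
 * quote as '\''.  Hypothetical standalone helper for illustration; the
 * real function lives in src/fe_utils/string_utils.c.
 */
static char *
shell_quote(const char *s)
{
	/* worst case every byte is a quote: 4x growth, plus wrapping + NUL */
	char	   *out = malloc(strlen(s) * 4 + 3);
	char	   *p = out;

	*p++ = '\'';
	for (; *s; s++)
	{
		if (*s == '\'')
		{
			/* close the quote, emit an escaped quote, reopen */
			memcpy(p, "'\\''", 4);
			p += 4;
		}
		else
			*p++ = *s;
	}
	*p++ = '\'';
	*p = '\0';
	return out;
}
```

Under this rule a directory name like o'brien becomes 'o'\''brien', which the shell reassembles into the original bytes regardless of spaces or metacharacters, so plain and non-plain mode handle weird filenames identically.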
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c

+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.

What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
I'm hearing that you copied this code from somewhere. Running
"git grep 'time to shut down'" suggests you copied it from
InteractiveBackend(). Is that right? I do see other similarities between
read_one_statement() and InteractiveBackend().

Copying InteractiveBackend() provides negligible assurance that this is the
right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
Single-user mode survives mostly as a last resort for recovering from having
reached xidStopLimit, is rarely used, and only superusers write queries to it.

Yes, we copied this from InteractiveBackend to read statements from
global.dat file.

+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of database to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+							SimpleOidStringList *dbname_oid_list,
+							SimpleStringList db_exclude_patterns)
+{
+	int			count_db = 0;
+	PQExpBuffer query;
+	PGresult   *res;
+
+	query = createPQExpBuffer();
+
+	if (!conn)
+		pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

When do we not have a connection here? We'd need to document this behavior
variation if it stays, but I'd prefer if we can just rely on having a
connection.

Yes, we can document this behavior.
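Back on the read_one_statement() point: the InteractiveBackend()-style rule under discussion can be sketched as follows. This is an assumed simplification for illustration, not the patch's actual code; note what is missing, which is exactly the "subset of SQL lexing" concern: a semicolon inside a quoted literal or dollar-quoted body would wrongly end the statement.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of a read_one_statement()-style reader.  Assumption: a
 * statement ends at a semicolon that is the last character on a line,
 * as in InteractiveBackend().  No quote or comment awareness.
 *
 * Returns the number of bytes stored in buf (0 at EOF).
 */
static size_t
read_statement_sketch(FILE *fp, char *buf, size_t bufsize)
{
	size_t		len = 0;
	int			c;

	while (len < bufsize - 1 && (c = fgetc(fp)) != EOF)
	{
		buf[len++] = (char) c;
		/* semicolon immediately before the newline ends the statement */
		if (c == '\n' && len >= 2 && buf[len - 2] == ';')
			break;
	}
	buf[len] = '\0';
	return len;
}
```

Feeding it "CREATE ROLE r1;" followed by a statement split across lines yields one statement per call, but a role name containing ";\n" inside quotes would break it.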
My review asked a question there. I don't see an answer to that question.
Would you answer that question?

Example: if there is no active database, even postgres/template1, then
we will consider PATTERN as NAME. This is a rare case.
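A tiny sketch of the fallback semantics being described (a hypothetical helper, not the patch's get_dbnames_list_to_restore()): with no connection available to expand patterns server-side, each --exclude-database argument is compared against database names as a literal string.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical illustration of the no-connection fallback: each
 * exclude entry is matched literally against database names.
 * Returns the number of databases left to restore.
 */
static int
count_dbs_to_restore(const char **dbnames, int ndbs,
					 const char **excludes, int nexcl)
{
	int			count = 0;

	for (int i = 0; i < ndbs; i++)
	{
		bool		skip = false;

		for (int j = 0; j < nexcl; j++)
			if (strcmp(dbnames[i], excludes[j]) == 0)
				skip = true;
		if (!skip)
			count++;			/* not excluded: restore it */
	}
	return count;
}
```

With a live connection the real code can expand a pattern like test* via the server; under this fallback that same argument excludes nothing unless a database is literally named "test*", which is the behavior variation the documentation would have to spell out.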
In attached patch, I added one doc line also for this case.

@@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn)
continue;
 		}

+		/*
+		 * If this is not a plain format dump, then append dboid and dbname to
+		 * the map.dat file.
+		 */
+		if (archDumpFormat != archNull)
+		{
+			if (archDumpFormat == archCustom)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+			else if (archDumpFormat == archTar)
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+			else
+				snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);

Use appendShellString() instead. Plain mode already does that for the
"pg_dumpall -f" argument, which is part of db_subdir here. We don't want
weird filename characters to work out differently for plain vs. non-plain
mode. Also, it's easier to search for appendShellString() than to search for
open-coded shell quoting.

Fixed.
@@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn)
 		if (filename)
 			fclose(OPF);

-		ret = runPgDump(dbname, create_opts);
+		ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
 		if (ret != 0)
 			pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);

 		if (filename)
 		{
-			OPF = fopen(filename, PG_BINARY_A);
+			char		global_path[MAXPGPATH];
+
+			if (archDumpFormat != archNull)
+				snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
+			else
+				snprintf(global_path, MAXPGPATH, "%s", filename);
+
+			OPF = fopen(global_path, PG_BINARY_A);
 			if (!OPF)
 				pg_fatal("could not re-open the output file \"%s\": %m",
-						 filename);
+						 global_path);

Minor item: plain mode benefits from reopening, because pg_dump appended to
the plain output file. There's no analogous need to reopen global.dat, since
just this one process writes to global.dat.

Fixed.
@@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts)
 	initPQExpBuffer(&connstrbuf);
 	initPQExpBuffer(&cmd);

-	printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
-					  pgdumpopts->data, create_opts);
-
 	/*
-	 * If we have a filename, use the undocumented plain-append pg_dump
-	 * format.
+	 * If this is not a plain format dump, then append file name and dump
+	 * format to the pg_dump command to get archive dump.
 	 */
-	if (filename)
-		appendPQExpBufferStr(&cmd, " -Fa ");
+	if (archDumpFormat != archNull)
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+						  dbfile, create_opts);
+
+		if (archDumpFormat == archDirectory)
+			appendPQExpBufferStr(&cmd, " --format=directory ");
+		else if (archDumpFormat == archCustom)
+			appendPQExpBufferStr(&cmd, " --format=custom ");
+		else if (archDumpFormat == archTar)
+			appendPQExpBufferStr(&cmd, " --format=tar ");
+	}
 	else
-		appendPQExpBufferStr(&cmd, " -Fp ");
+	{
+		printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+						  pgdumpopts->data, create_opts);

This uses pgdumpopts for plain mode only, so many pg_dumpall options silently
have no effect in non-plain mode. Example:

strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec
strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec

Fixed.
+	/* If database is already created, then don't set createDB flag. */
+	if (opts->cparams.dbname)
+	{
+		PGconn	   *test_conn;
+
+		test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost,
+									opts->cparams.pgport, opts->cparams.username,
+									TRI_DEFAULT, false, progname,
+									NULL, NULL, NULL, NULL);
+		if (test_conn)
+		{
+			PQfinish(test_conn);
+
+			/* Use already created database for connection. */
+			opts->createDB = 0;
+			opts->cparams.dbname = db_cell->str;
+		}
+		else
+		{
+			/* we'll have to create it */
+			opts->createDB = 1;
+			opts->cparams.dbname = connected_db;
+		}

In released versions, "pg_restore --create" fails if the database exists, and
pg_restore w/o --create fails unless the database exists. I think we should
continue that pattern in this new feature. If not, pg_restore should document
how it treats pg_dumpall-sourced dumps with the "create if not exists"
semantics appearing here.

Added one more doc line for this case.
Here, I am attaching a patch. Please let me know feedback.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v01-17-july-use-appendShellString-to-append-file-names.noci (application/octet-stream)
From 6ff7c873d4842c47da22666d7efaba7551fb110b Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 17 Jul 2025 15:24:25 +0530
Subject: [PATCH] use appendShellString to append file names
---
doc/src/sgml/ref/pg_restore.sgml | 6 ++-
src/bin/pg_dump/pg_dumpall.c | 67 ++++++++++++++++++--------------
src/bin/pg_dump/pg_restore.c | 28 +++++++------
3 files changed, 60 insertions(+), 41 deletions(-)
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b649bd3a5ae..f4eb31f2324 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -150,7 +150,9 @@ PostgreSQL documentation
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
<option>--create</option> is required when restoring multiple databases
- from an archive created by <application>pg_dumpall</application>.
+ from an archive created by <application>pg_dumpall</application>; if the
+ database already exists, it will be restored without raising an
+ error.
</para>
<para>
@@ -621,6 +623,8 @@ PostgreSQL documentation
</para>
<para>
This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ If no database connection exists, then <replaceable class="parameter">pattern</replaceable> will be treated
+ as a literal <replaceable class="parameter">name</replaceable>.
</para>
</listitem>
</varlistentry>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 3cbcad65c5f..746c1f073c2 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -1622,8 +1622,8 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
- char db_subdir[MAXPGPATH];
- char dbfilepath[MAXPGPATH];
+ PQExpBufferData db_subdir;
+ PQExpBufferData dbfilepath;
FILE *map_file = NULL;
/*
@@ -1653,20 +1653,28 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
*/
if (archDumpFormat != archNull)
{
- char map_file_path[MAXPGPATH];
+ PQExpBufferData map_file_path;
- snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ initPQExpBuffer(&db_subdir);
+ initPQExpBuffer(&dbfilepath);
+ initPQExpBuffer(&map_file_path);
+
+ appendShellString(&db_subdir, filename);
+ appendPQExpBufferChar(&db_subdir, '/');
+ appendPQExpBufferStr(&db_subdir, "databases");
/* Create a subdirectory with 'databases' name under main directory. */
- if (mkdir(db_subdir, pg_dir_create_mode) != 0)
- pg_fatal("could not create directory \"%s\": %m", db_subdir);
+ if (mkdir(db_subdir.data, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir.data);
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ appendShellString(&map_file_path, filename);
+ appendPQExpBufferChar(&map_file_path, '/');
+ appendPQExpBufferStr(&map_file_path, "map.dat");
/* Create a map file (to store dboid and dbname) */
- map_file = fopen(map_file_path, PG_BINARY_W);
+ map_file = fopen(map_file_path.data, PG_BINARY_W);
if (!map_file)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
+ pg_fatal("could not open file \"%s\": %m", map_file_path.data);
}
for (i = 0; i < PQntuples(res); i++)
@@ -1693,12 +1701,16 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
*/
if (archDumpFormat != archNull)
{
+ resetPQExpBuffer(&dbfilepath);
+
+ appendShellString(&dbfilepath, db_subdir.data);
+ appendPQExpBufferChar(&dbfilepath, '/');
+ appendShellString(&dbfilepath, oid);
+
if (archDumpFormat == archCustom)
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ appendPQExpBufferStr(&dbfilepath, ".dmp");
else if (archDumpFormat == archTar)
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
- else
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+ appendPQExpBufferStr(&dbfilepath, ".tar");
/* Put one line entry for dboid and dbname in map file. */
fprintf(map_file, "%s %s\n", oid, dbname);
@@ -1731,23 +1743,20 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
+ ret = runPgDump(dbname, create_opts, dbfilepath.data, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ /*
+	 * For non-plain mode, there is no need to re-open the file, as we
+	 * write data into it only once.
+ */
+ if (filename && archDumpFormat == archNull)
{
- char global_path[MAXPGPATH];
-
- if (archDumpFormat != archNull)
- snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
- else
- snprintf(global_path, MAXPGPATH, "%s", filename);
-
- OPF = fopen(global_path, PG_BINARY_A);
+ OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- global_path);
+ filename);
}
}
@@ -1774,14 +1783,17 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s ", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
/*
* If this is not a plain format dump, then append file name and dump
* format to the pg_dump command to get archive dump.
*/
if (archDumpFormat != archNull)
{
- printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
- dbfile, create_opts);
+
+	appendPQExpBuffer(&cmd, " -f %s ", dbfile);
if (archDumpFormat == archDirectory)
appendPQExpBufferStr(&cmd, " --format=directory ");
@@ -1792,9 +1804,6 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
}
else
{
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
* If we have a filename, use the undocumented plain-append pg_dump
* format.
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 6ef789cb06d..4bb10971884 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -1038,7 +1038,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
{
StringInfoData linebuf;
FILE *pfile;
- char map_file_path[MAXPGPATH];
+ PQExpBufferData map_file_path;
int count = 0;
@@ -1052,13 +1052,16 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
return 0;
}
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+ initPQExpBuffer(&map_file_path);
+ appendShellString(&map_file_path, dumpdirpath);
+ appendPQExpBufferChar(&map_file_path, '/');
+ appendPQExpBufferStr(&map_file_path, "map.dat");
/* Open map.dat file. */
- pfile = fopen(map_file_path, PG_BINARY_R);
+ pfile = fopen(map_file_path.data, PG_BINARY_R);
if (pfile == NULL)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
+ pg_fatal("could not open file \"%s\": %m", map_file_path.data);
initStringInfo(&linebuf);
@@ -1086,11 +1089,11 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
/* Report error and exit if the file has any corrupted data. */
if (!OidIsValid(db_oid) || namelen <= 1)
- pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path.data,
count + 1);
pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
- dbname, db_oid, map_file_path);
+ dbname, db_oid, map_file_path.data);
dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
dbidname->oid = db_oid;
@@ -1306,20 +1309,23 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
static int
process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
{
- char global_file_path[MAXPGPATH];
+ PQExpBufferData global_file_path;
PGresult *result;
StringInfoData sqlstatement,
user_create;
FILE *pfile;
int n_errors = 0;
- snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+ initPQExpBuffer(&global_file_path);
+ appendShellString(&global_file_path, dumpdirpath);
+ appendPQExpBufferChar(&global_file_path, '/');
+ appendPQExpBufferStr(&global_file_path, "global.dat");
/* Open global.dat file. */
- pfile = fopen(global_file_path, PG_BINARY_R);
+ pfile = fopen(global_file_path.data, PG_BINARY_R);
if (pfile == NULL)
- pg_fatal("could not open file \"%s\": %m", global_file_path);
+ pg_fatal("could not open file \"%s\": %m", global_file_path.data);
/*
* If outfile is given, then just copy all global.dat file data into
@@ -1369,7 +1375,7 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
if (n_errors)
pg_log_warning(ngettext("ignored %d error in file \"%s\"",
"ignored %d errors in file \"%s\"", n_errors),
- n_errors, global_file_path);
+ n_errors, global_file_path.data);
fclose(pfile);
return n_errors;
--
2.39.3
On 2025-Jul-17, Mahendra Singh Thalor wrote:
To pg_restore, we are giving a dump of pg_dumpall which has a
global.dat file and we have drop commands in the global.dat file so
when we are using 'globals-only', we are dropping databases as we have
DROP commands.
As of now, we don't have any filter for global.dat file in restore. If
a user wants to restore only globals (without dropping databases), then they
should use 'globals-only' in pg_dumpall.
Or if we don't want to DROP databases by global.dat file, then we
should add a filter in pg_restore (hard to implement as we have SQL
commands in global.dat file).
I think dropping databases is dangerous and makes no practical sense;
doing it renders pg_dumpall --clean completely unusable. You're arguing
from the point of view of ease of implementation, but that doesn't help
users.
I think, for this case, we can do some
more doc changes.
Example: pg_restore --globals-only : this will restore the global.dat
file (including all drop commands). It might drop databases if it
contains any DROP commands.
I don't think doc changes are useful.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"I love the Postgres community. It's all about doing things _properly_. :-)"
(David Garamond)
Thanks Álvaro for the feedback.
On Thu, 17 Jul 2025 at 16:41, Álvaro Herrera <alvherre@alvh.no-ip.org>
wrote:
On 2025-Jul-17, Mahendra Singh Thalor wrote:
To pg_restore, we are giving a dump of pg_dumpall which has a
global.dat file and we have drop commands in the global.dat file so
when we are using 'globals-only', we are dropping databases as we have
DROP commands.
As of now, we don't have any filter for global.dat file in restore. If
a user wants to restore only globals (without dropping databases), then they
should use 'globals-only' in pg_dumpall.
Or if we don't want to DROP databases by global.dat file, then we
should add a filter in pg_restore (hard to implement as we have SQL
commands in global.dat file).

I think dropping databases is dangerous and makes no practical sense;
doing it renders pg_dumpall --clean completely unusable. You're arguing
from the point of view of ease of implementation, but that doesn't help
users.
I have 2 more solutions for this case.
*Solution1*: dump DROP database/role/tablespace commands in global_drop.dat
(or dump only DROP DATABASE commands in global_drop.dat file) and skip
restoring this file with globals-only.
*Solution2*: add one more filter in restore to skip the "DROP DATABASE"
command as we already have one filter for "CREATE USER".
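Solution 2 could be prototyped with a small predicate applied to each statement read from global.dat. This is a hedged sketch under simplifying assumptions (only leading whitespace is skipped, keywords are matched case-insensitively, SQL comments are not handled), not code from the patch:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/*
 * Hypothetical predicate for "solution 2": report whether a statement
 * from global.dat is a DROP DATABASE command.  Simplified: comments
 * and leading noise other than whitespace are not recognized.
 */
static bool
is_drop_database(const char *stmt)
{
	const char *kw = "DROP DATABASE";
	size_t		kwlen = strlen(kw);

	while (isspace((unsigned char) *stmt))
		stmt++;
	/* keyword must be followed by a non-identifier character */
	return strncasecmp(stmt, kw, kwlen) == 0 &&
		!isalnum((unsigned char) stmt[kwlen]) && stmt[kwlen] != '_';
}
```

pg_restore would then skip statements for which this returns true unless dropping is actually wanted, in the same spirit as the existing CREATE USER filter mentioned above.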
Based on *solution1*, I made a WIP patch. Here, I am attaching a patch for
feedback.
Note: please use this v02 patch for review.
I think, for this case, we can do some
more doc changes.
Example: pg_restore --globals-only : this will restore the global.dat
file (including all drop commands). It might drop databases if it
contains any DROP commands.

I don't think doc changes are useful.
--
Álvaro Herrera 48°01'N 7°57'E —
"I love the Postgres community. It's all about doing things _properly_.
:-)"
(David Garamond)
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v02-17-july-use-appendShellString-to-append-file-names.noci (application/octet-stream)
From c11535220dbcbbb1e7c3b59c63e66002e5cfa629 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 17 Jul 2025 18:16:47 +0530
Subject: [PATCH] use appendShellString to append file names
create global_drop.dat file for database/role/tablespace
Note: we should keep only DROP DATABASE, but for testing I kept all 3.
---
doc/src/sgml/ref/pg_restore.sgml | 6 +-
src/bin/pg_dump/pg_dumpall.c | 94 ++++++++++++++++--------
src/bin/pg_dump/pg_restore.c | 122 +++++++++++++++++++++----------
3 files changed, 154 insertions(+), 68 deletions(-)
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b649bd3a5ae..f4eb31f2324 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -150,7 +150,9 @@ PostgreSQL documentation
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
<option>--create</option> is required when restoring multiple databases
- from an archive created by <application>pg_dumpall</application>.
+ from an archive created by <application>pg_dumpall</application>; if the
+ database already exists, it will be restored without raising an
+ error.
</para>
<para>
@@ -621,6 +623,8 @@ PostgreSQL documentation
</para>
<para>
This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ If no database connection exists, then <replaceable class="parameter">pattern</replaceable> will be treated
+ as a literal <replaceable class="parameter">name</replaceable>.
</para>
</listitem>
</varlistentry>
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 100317b1aa9..e947255c52c 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -642,6 +642,25 @@ main(int argc, char *argv[])
*/
if (output_clean)
{
+ FILE *drop_OPF = NULL;
+ FILE *old_OPF = OPF;
+
+ if (archDumpFormat != archNull)
+ {
+ char global_drop_path[MAXPGPATH];
+
+ snprintf(global_drop_path, MAXPGPATH, "%s/global_drop.dat", filename);
+
+ drop_OPF = fopen(global_drop_path, PG_BINARY_W);
+
+ if (!drop_OPF)
+ pg_fatal("could not open file \"%s\": %m", global_drop_path);
+
+ }
+
+ if (drop_OPF)
+ OPF = drop_OPF;
+
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
@@ -650,6 +669,12 @@ main(int argc, char *argv[])
if (!tablespaces_only)
dropRoles(conn);
+
+ if (drop_OPF)
+ {
+ fclose(drop_OPF);
+ OPF = old_OPF;
+ }
}
/*
@@ -1622,8 +1647,8 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
- char db_subdir[MAXPGPATH];
- char dbfilepath[MAXPGPATH];
+ PQExpBufferData db_subdir;
+ PQExpBufferData dbfilepath;
FILE *map_file = NULL;
/*
@@ -1653,20 +1678,28 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
*/
if (archDumpFormat != archNull)
{
- char map_file_path[MAXPGPATH];
+ PQExpBufferData map_file_path;
- snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ initPQExpBuffer(&db_subdir);
+ initPQExpBuffer(&dbfilepath);
+ initPQExpBuffer(&map_file_path);
+
+ appendShellString(&db_subdir, filename);
+ appendPQExpBufferChar(&db_subdir, '/');
+ appendPQExpBufferStr(&db_subdir, "databases");
/* Create a subdirectory with 'databases' name under main directory. */
- if (mkdir(db_subdir, pg_dir_create_mode) != 0)
- pg_fatal("could not create directory \"%s\": %m", db_subdir);
+ if (mkdir(db_subdir.data, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir.data);
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ appendShellString(&map_file_path, filename);
+ appendPQExpBufferChar(&map_file_path, '/');
+ appendPQExpBufferStr(&map_file_path, "map.dat");
/* Create a map file (to store dboid and dbname) */
- map_file = fopen(map_file_path, PG_BINARY_W);
+ map_file = fopen(map_file_path.data, PG_BINARY_W);
if (!map_file)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
+ pg_fatal("could not open file \"%s\": %m", map_file_path.data);
}
for (i = 0; i < PQntuples(res); i++)
@@ -1693,12 +1726,16 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
*/
if (archDumpFormat != archNull)
{
+ resetPQExpBuffer(&dbfilepath);
+
+ appendShellString(&dbfilepath, db_subdir.data);
+ appendPQExpBufferChar(&dbfilepath, '/');
+ appendShellString(&dbfilepath, oid);
+
if (archDumpFormat == archCustom)
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ appendPQExpBufferStr(&dbfilepath, ".dmp");
else if (archDumpFormat == archTar)
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
- else
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+ appendPQExpBufferStr(&dbfilepath, ".tar");
/* Put one line entry for dboid and dbname in map file. */
fprintf(map_file, "%s %s\n", oid, dbname);
@@ -1728,26 +1765,23 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
+ ret = runPgDump(dbname, create_opts, dbfilepath.data, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ /*
+ * For non-plain mode, no need to re-open file as only once we write
+ * data into file.
+ */
+ if (filename && archDumpFormat == archNull)
{
- char global_path[MAXPGPATH];
-
- if (archDumpFormat != archNull)
- snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
- else
- snprintf(global_path, MAXPGPATH, "%s", filename);
-
- OPF = fopen(global_path, PG_BINARY_A);
+ OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- global_path);
+ filename);
}
}
@@ -1774,14 +1808,17 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s ", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
/*
* If this is not a plain format dump, then append file name and dump
* format to the pg_dump command to get archive dump.
*/
if (archDumpFormat != archNull)
{
- printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
- dbfile, create_opts);
+ appendPQExpBufferStr(&cmd, " -f ");
+ appendShellString(&cmd, dbfile);
if (archDumpFormat == archDirectory)
appendPQExpBufferStr(&cmd, " --format=directory ");
@@ -1792,9 +1829,6 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
}
else
{
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
* If we have a filename, use the undocumented plain-append pg_dump
* format.
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 6ef789cb06d..5046c00477d 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -64,8 +64,8 @@ static int read_one_statement(StringInfo inBuf, FILE *pfile);
static int restore_all_databases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
- const char *outfile);
-static void copy_or_print_global_file(const char *outfile, FILE *pfile);
+ const char *outfile, bool drop_commands);
+static void copy_or_print_global_file(const char *outfile, FILE *pfile, FILE *dfile);
static int get_dbnames_list_to_restore(PGconn *conn,
SimplePtrList *dbname_oid_list,
SimpleStringList db_exclude_patterns);
@@ -552,7 +552,7 @@ main(int argc, char **argv)
* commands.
*/
n_errors = process_global_sql_commands(conn, inputFileSpec,
- opts->filename);
+ opts->filename, false);
if (conn)
PQfinish(conn);
@@ -1038,7 +1038,7 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
{
StringInfoData linebuf;
FILE *pfile;
- char map_file_path[MAXPGPATH];
+ PQExpBufferData map_file_path;
int count = 0;
@@ -1052,13 +1052,16 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
return 0;
}
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+ initPQExpBuffer(&map_file_path);
+ appendShellString(&map_file_path, dumpdirpath);
+ appendPQExpBufferChar(&map_file_path, '/');
+ appendPQExpBufferStr(&map_file_path, "map.dat");
/* Open map.dat file. */
- pfile = fopen(map_file_path, PG_BINARY_R);
+ pfile = fopen(map_file_path.data, PG_BINARY_R);
if (pfile == NULL)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
+ pg_fatal("could not open file \"%s\": %m", map_file_path.data);
initStringInfo(&linebuf);
@@ -1086,11 +1089,11 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
/* Report error and exit if the file has any corrupted data. */
if (!OidIsValid(db_oid) || namelen <= 1)
- pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path.data,
count + 1);
pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
- dbname, db_oid, map_file_path);
+ dbname, db_oid, map_file_path.data);
dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
dbidname->oid = db_oid;
@@ -1140,7 +1143,7 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
/* If map.dat has no entries, return after processing global.dat */
if (dbname_oid_list.head == NULL)
- return process_global_sql_commands(conn, dumpdirpath, opts->filename);
+ return process_global_sql_commands(conn, dumpdirpath, opts->filename, true);
pg_log_info(ngettext("found %d database name in \"%s\"",
"found %d database names in \"%s\"",
@@ -1173,7 +1176,7 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
db_exclude_patterns);
/* Open global.dat file and execute/append all the global sql commands. */
- n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
+ n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename, true);
/* Close the db connection as we are done with globals and patterns. */
if (conn)
@@ -1304,22 +1307,25 @@ restore_all_databases(PGconn *conn, const char *dumpdirpath,
* Returns the number of errors while processing global.dat
*/
static int
-process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
+process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile, bool drop_commands)
{
- char global_file_path[MAXPGPATH];
+ PQExpBufferData global_file_path;
PGresult *result;
StringInfoData sqlstatement,
user_create;
FILE *pfile;
int n_errors = 0;
- snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
+ initPQExpBuffer(&global_file_path);
+ appendShellString(&global_file_path, dumpdirpath);
+ appendPQExpBufferChar(&global_file_path, '/');
+ appendPQExpBufferStr(&global_file_path, "global.dat");
/* Open global.dat file. */
- pfile = fopen(global_file_path, PG_BINARY_R);
+ pfile = fopen(global_file_path.data, PG_BINARY_R);
if (pfile == NULL)
- pg_fatal("could not open file \"%s\": %m", global_file_path);
+ pg_fatal("could not open file \"%s\": %m", global_file_path.data);
/*
* If outfile is given, then just copy all global.dat file data into
@@ -1327,7 +1333,20 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
*/
if (outfile)
{
- copy_or_print_global_file(outfile, pfile);
+ FILE *dfile = NULL;
+
+ if (drop_commands)
+ {
+ resetPQExpBuffer(&global_file_path);
+ appendShellString(&global_file_path, dumpdirpath);
+ appendPQExpBufferChar(&global_file_path, '/');
+ appendPQExpBufferStr(&global_file_path, "global_drop.dat");
+
+ /* Open global_drop.dat file. */
+ dfile = fopen(global_file_path.data, PG_BINARY_R);
+ }
+
+ copy_or_print_global_file(outfile, pfile, dfile);
return 0;
}
@@ -1341,36 +1360,56 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
appendStringInfoString(&user_create, PQuser(conn));
appendStringInfoChar(&user_create, ';');
- /* Process file till EOF and execute sql statements. */
- while (read_one_statement(&sqlstatement, pfile) != EOF)
+ while(true)
{
- /* don't try to create the role we are connected as */
- if (strstr(sqlstatement.data, user_create.data))
- continue;
+ /* Process file till EOF and execute sql statements. */
+ while (read_one_statement(&sqlstatement, pfile) != EOF)
+ {
+ /* don't try to create the role we are connected as */
+ if (strstr(sqlstatement.data, user_create.data))
+ continue;
+
+ pg_log_info("executing query: %s", sqlstatement.data);
+ result = PQexec(conn, sqlstatement.data);
+
+ switch (PQresultStatus(result))
+ {
+ case PGRES_COMMAND_OK:
+ case PGRES_TUPLES_OK:
+ case PGRES_EMPTY_QUERY:
+ break;
+ default:
+ n_errors++;
+ pg_log_error("could not execute query: %s", PQerrorMessage(conn));
+ pg_log_error_detail("Command was: %s", sqlstatement.data);
+ }
+ PQclear(result);
+ }
- pg_log_info("executing query: %s", sqlstatement.data);
- result = PQexec(conn, sqlstatement.data);
+ fclose(pfile);
- switch (PQresultStatus(result))
+ if (drop_commands)
{
- case PGRES_COMMAND_OK:
- case PGRES_TUPLES_OK:
- case PGRES_EMPTY_QUERY:
- break;
- default:
- n_errors++;
- pg_log_error("could not execute query: %s", PQerrorMessage(conn));
- pg_log_error_detail("Command was: %s", sqlstatement.data);
+ drop_commands = false;
+ resetPQExpBuffer(&global_file_path);
+ appendShellString(&global_file_path, dumpdirpath);
+ appendPQExpBufferChar(&global_file_path, '/');
+ appendPQExpBufferStr(&global_file_path, "global_drop.dat");
+
+ /* Open global_drop.dat file. */
+ pfile = fopen(global_file_path.data, PG_BINARY_R);
+
+ if (pfile)
+ continue;
}
- PQclear(result);
+ break;
}
/* Print a summary of ignored errors during global.dat. */
if (n_errors)
pg_log_warning(ngettext("ignored %d error in file \"%s\"",
"ignored %d errors in file \"%s\"", n_errors),
- n_errors, global_file_path);
- fclose(pfile);
+ n_errors, global_file_path.data);
return n_errors;
}
@@ -1382,7 +1421,7 @@ process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
* then print commands to stdout.
*/
static void
-copy_or_print_global_file(const char *outfile, FILE *pfile)
+copy_or_print_global_file(const char *outfile, FILE *pfile, FILE *dfile)
{
char out_file_path[MAXPGPATH];
FILE *OPF;
@@ -1407,6 +1446,15 @@ copy_or_print_global_file(const char *outfile, FILE *pfile)
while ((c = fgetc(pfile)) != EOF)
fputc(c, OPF);
+ /* Append drop database commands. */
+ if (dfile)
+ {
+ while ((c = fgetc(dfile)) != EOF)
+ fputc(c, OPF);
+
+ fclose(dfile);
+ }
+
fclose(pfile);
/* Close output file. */
--
2.39.3
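As a side note on the patch above: the map.dat layout it relies on is one "dboid dbname" line per database, written by dumpDatabases() and parsed back by get_dbname_oid_list_from_mfile(). A minimal sketch of that parsing in Python (illustrative only; it assumes the one-line-per-entry layout and mirrors the same validity checks, it is not the C code):

```python
# Illustrative parser for the "dboid dbname" map.dat layout used above.
# This is a sketch, not PostgreSQL code; it mirrors the checks
# get_dbname_oid_list_from_mfile() performs (valid OID, non-empty name).

def read_map_file(path):
    """Return a list of (oid, dbname) tuples from a map.dat file."""
    entries = []
    with open(path, "r", encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.rstrip("\n")
            if not line:
                continue
            # The OID contains no spaces, so split once from the left;
            # everything after the first space is the database name,
            # which may itself contain spaces.
            oid, sep, dbname = line.partition(" ")
            if not sep or not oid.isdigit() or not dbname:
                raise ValueError(
                    f'invalid entry in "{path}" on line {lineno}')
            entries.append((int(oid), dbname))
    return entries
```

Note that a database name containing a newline cannot round-trip through this one-line format, which is one reason the discussion later in the thread turns to rejecting or escaping such names.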
On 2025-07-17 Th 7:11 AM, Álvaro Herrera wrote:
On 2025-Jul-17, Mahendra Singh Thalor wrote:
To pg_restore, we are giving a dump of pg_dumpall which has a
global.dat file and we have drop commands in the global.dat file so
when we are using 'globals-only', we are dropping databases as we have
DROP commands.
As of now, we don't have any filter for global.dat file in restore. If
a user wants to restore only globals (without dropping db), then they
should use 'globals-only' in pg_dumpall.
Or if we don't want to DROP databases by global.dat file, then we
should add a filter in pg_restore (hard to implement as we have SQL
commands in global.dat file).
I think dropping database is dangerous and makes no practical sense;
doing it renders pg_dumpall --clean completely unusable. You're arguing
from the point of view of ease of implementation, but that doesn't help
users.
Yeah. I also agree with Noah that we should be consistent with pg_dump.
And we should err on the side of caution. If we impose a little
inconvenience on the user by requiring them to drop a database, it's
better than surprising them by dropping a database when they didn't
expect it.
There are some subtleties here. pg_restore will only issue DROP DATABASE
if you use the -C flag, even if you specify --clean, so we need to be
very careful about issuing DROP DATABASE.
I confess that all this didn't occur to me when working on the commit.
I think, for this case, we can do some
more doc changes.
Example: pg_restore --globals-only : this will restore the global.dat
file(including all drop commands). It might drop databases if any drop
commands.
I don't think doc changes are useful.
Yeah, I don't think this is something that can be cured by documentation.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2025-07-17 Th 6:18 AM, Mahendra Singh Thalor wrote:
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
I'm hearing that you copied this code from somewhere. Running
"git grep 'time to shut down'" suggests you copied it from
InteractiveBackend(). Is that right? I do see other similarities between
read_one_statement() and InteractiveBackend().
Copying InteractiveBackend() provides negligible assurance that this is the
right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
Single-user mode survives mostly as a last resort for recovering from having
reached xidStopLimit, is rarely used, and only superusers write queries to it.
Yes, we copied this from
global.dat file.
Maybe we should ensure that identifiers with CR or LF are turned into
Unicode quoted identifiers, so each SQL statement would always only
occupy one line. Or just reject role and tablespace names with CR or LF
altogether, just as we do for database names.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
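To make the first suggestion concrete, here is a sketch (mine, not patch code) of rendering an identifier in PostgreSQL's U&"..." Unicode-escape syntax, so that CR/LF and other control characters never appear literally and each statement stays on one physical line. It assumes the default backslash escape character (no UESCAPE clause):

```python
# Hedged sketch of the idea above: render an identifier as a
# Unicode-escaped quoted identifier (U&"...") so CR/LF never appear
# literally.  Follows PostgreSQL's U&"..." escapes (\XXXX for four hex
# digits, \+XXXXXX for six); illustrative only, not the actual patch code.

def unicode_quoted_identifier(name):
    out = ['U&"']
    for ch in name:
        cp = ord(ch)
        if ch == '"':
            out.append('""')          # doubled quote, as in ordinary quoting
        elif ch == '\\':
            out.append('\\\\')        # a literal backslash must be doubled
        elif cp < 0x20 or cp > 0x7e:  # controls (incl. CR/LF) and non-ASCII
            if cp <= 0xFFFF:
                out.append('\\%04X' % cp)
            else:
                out.append('\\+%06X' % cp)
        else:
            out.append(ch)
    out.append('"')
    return ''.join(out)
```

For example, a role named with an embedded newline would be emitted as U&"evil\000Arole", which occupies a single physical line.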
On Mon, Jul 21, 2025 at 04:41:03PM -0400, Andrew Dunstan wrote:
On 2025-07-17 Th 6:18 AM, Mahendra Singh Thalor wrote:
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
I'm hearing that you copied this code from somewhere. Running
"git grep 'time to shut down'" suggests you copied it from
InteractiveBackend(). Is that right? I do see other similarities between
read_one_statement() and InteractiveBackend().
Copying InteractiveBackend() provides negligible assurance that this is the
right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
Single-user mode survives mostly as a last resort for recovering from having
reached xidStopLimit, is rarely used, and only superusers write queries to it.
Yes, we copied this from
global.dat file.
Maybe we should ensure that identifiers with CR or LF are turned into
Unicode quoted identifiers, so each SQL statement would always only occupy
one line.
Interesting. That might work.
Or just reject role and tablespace names with CR or LF altogether,
just as we do for database names.
There are other ways to get multi-line statements. Non-exhaustive list:
- pg_db_role_setting.setconfig
- pg_shdescription.description
- pg_shseclabel.label
- pg_tablespace.spcoptions (if we add a text option in the future)
I think this decision about lexing also ties to other unfinished open item
work of aligning "pg_dumpall -Fd;pg_restore [options]" behavior with "pg_dump
-Fd;pg_restore [options]". "pg_restore --no-privileges" should not restore
pg_tablespace.spcacl, and "pg_restore --no-comments" should not emit COMMENT
statements.
I suspect this is going to end with a structured dump like we use on the
pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
lex text files to do its job. pg_dumpall deals with a more-limited set of
statements than pg_dump deals with, but they're not _that much_ more limited.
I won't veto a lexing-based approach if it gets the behaviors right, but I
don't have high hopes for it getting the behaviors right and staying that way.
(I almost said "pg_restore --no-owner" should not restore
pg_tablespace.spcowner, but v17 "pg_dumpall --no-owner" does restore it. One
could argue for or against aligning $SUBJECT behavior w/ v17's mistake there.)
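To illustrate why a structured dump makes those option behaviors easy to get right: if each global object were stored as a record with its ACL and comment in separate fields (the field names below are my own, purely illustrative), the restore side could honor --no-privileges and --no-comments by skipping fields rather than by lexing SQL:

```python
# Sketch of a structured globals entry and an option-aware emitter.
# The record layout is hypothetical; the point is that option filtering
# becomes field selection instead of SQL lexing at restore time.

def emit_global(entry, no_privileges=False, no_comments=False):
    """Return the SQL statements to run for one structured globals entry."""
    stmts = [entry["create_sql"]]
    if not no_privileges:
        stmts.extend(entry.get("acl_sql", []))
    if not no_comments and entry.get("comment_sql"):
        stmts.append(entry["comment_sql"])
    return stmts

tablespace_entry = {
    "kind": "tablespace",
    "name": "ts1",
    "create_sql": "CREATE TABLESPACE ts1 LOCATION '/data/ts1';",
    "acl_sql": ["GRANT CREATE ON TABLESPACE ts1 TO app;"],
    "comment_sql": "COMMENT ON TABLESPACE ts1 IS 'app data';",
}
```

This is the same division of labor the per-database TOC already gives pg_restore, just applied to globals.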
On 2025-07-21 Mo 8:53 PM, Noah Misch wrote:
On Mon, Jul 21, 2025 at 04:41:03PM -0400, Andrew Dunstan wrote:
On 2025-07-17 Th 6:18 AM, Mahendra Singh Thalor wrote:
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
+/*
+ * read_one_statement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.dat file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
What makes it okay to use this particular subset of SQL lexing?
To support complex syntax, we used this code from another file.
I'm hearing that you copied this code from somewhere. Running
"git grep 'time to shut down'" suggests you copied it from
InteractiveBackend(). Is that right? I do see other similarities between
read_one_statement() and InteractiveBackend().
Copying InteractiveBackend() provides negligible assurance that this is the
right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
Single-user mode survives mostly as a last resort for recovering from having
reached xidStopLimit, is rarely used, and only superusers write queries to it.
Yes, we copied this from
global.dat file.
Maybe we should ensure that identifiers with CR or LF are turned into
Unicode quoted identifiers, so each SQL statement would always only occupy
one line.
Interesting. That might work.
Or just reject role and tablespace names with CR or LF altogether,
just as we do for database names.
There are other ways to get multi-line statements. Non-exhaustive list:
- pg_db_role_setting.setconfig
- pg_shdescription.description
- pg_shseclabel.label
- pg_tablespace.spcoptions (if we add a text option in the future)
I think this decision about lexing also ties to other unfinished open item
work of aligning "pg_dumpall -Fd;pg_restore [options]" behavior with "pg_dump
-Fd;pg_restore [options]". "pg_restore --no-privileges" should not restore
pg_tablespace.spcacl, and "pg_restore --no-comments" should not emit COMMENT
statements.
I suspect this is going to end with a structured dump like we use on the
pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
lex text files to do its job. pg_dumpall deals with a more-limited set of
statements than pg_dump deals with, but they're not _that much_ more limited.
I won't veto a lexing-based approach if it gets the behaviors right, but I
don't have high hopes for it getting the behaviors right and staying that way.
Yeah, that was my original idea. But maybe instead of extending the
archive mechanism, we could do something more lightweight, e.g. output
the statements as a JSON array.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
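As a sketch of that lighter-weight idea (a format illustration only, not a proposed implementation): storing the global statements as a JSON array makes embedded newlines and quotes unambiguous, so the restore side needs only a JSON parser, not an SQL lexer:

```python
# Sketch: globals as a JSON array of SQL statement strings.
# JSON string escaping (\n, \", \uXXXX) keeps each statement unambiguous
# even when identifiers contain newlines or quotes.

import json

def write_globals_json(path, statements):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(statements, f, ensure_ascii=True, indent=1)

def read_globals_json(path):
    with open(path, "r", encoding="utf-8") as f:
        stmts = json.load(f)
    if not isinstance(stmts, list) or not all(isinstance(s, str) for s in stmts):
        raise ValueError(f'"{path}" is not a JSON array of SQL statements')
    return stmts
```

A statement list round-trips even with a role name containing a newline, which is exactly the case that defeats line-oriented parsing of global.dat.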
On Thu, Jul 17, 2025 at 03:46:41PM +0530, Mahendra Singh Thalor wrote:
On Wed, 16 Jul 2025 at 05:50, Noah Misch <noah@leadboat.com> wrote:
On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote:
On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote:
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of database to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+                            SimpleOidStringList *dbname_oid_list,
+                            SimpleStringList db_exclude_patterns)
+{
+    int count_db = 0;
+    PQExpBuffer query;
+    PGresult *res;
+
+    query = createPQExpBuffer();
+
+    if (!conn)
+        pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");
When do we not have a connection here? We'd need to document this behavior
variation if it stays, but I'd prefer if we can just rely on having a
connection.
Yes, we can document this behavior.
My review asked a question there. I don't see an answer to that question.
Would you answer that question?
Example: if there is no active database, even postgres/template1, then
we will consider PATTERN as NAME. This is the rare case.
In attached patch, I added one doc line also for this case.
If I change s/pg_log_info/pg_fatal/, check-world still passes. So no test is
reaching the !conn case. If one wanted to write a test that reaches the !conn
test, how would they do that?
On Wed, Jul 9, 2025 at 2:51 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
This drops all databases:
pg_dumpall --clean -Fd -f /tmp/dump
pg_restore -d template1 --globals-only /tmp/dump
That didn't match my expectations given this help text:
$ pg_restore --help|grep global
-g, --globals-only restore only global objects, no databases
Databases are global objects so due to --clean command, we are putting
drop commands in global.dat for all the databases. While restoring, we
used the "--globals-only" option so we are dropping all these
databases by global.dat file.
Please let us know your expectations for this specific case.
I am not sure whether pg_dumpall --clean should ever drop databases,
but it certainly shouldn't do it with --globals-only. In that case,
it's not restoring the databases, so dropping them seems
catastrophically bad.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 2025-07-21 Mo 8:53 PM, Noah Misch wrote:
I suspect this is going to end with a structured dump like we use on the
pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
lex text files to do its job. pg_dumpall deals with a more-limited set of
statements than pg_dump deals with, but they're not _that much_ more limited.
I won't veto a lexing-based approach if it gets the behaviors right, but I
don't have high hopes for it getting the behaviors right and staying that way.
I have been talking offline with Mahendra about this. I agree that we
would be better off with a structured object for globals. But the thing
that's been striking me all afternoon as I have pondered it is that we
should not be designing such an animal at this stage of the cycle.
Whatever we do we're going to be stuck supporting, so I have very
reluctantly come to the conclusion that it would probably be better to
back the feature out and have another go for PG 19.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, Jul 24, 2025 at 04:33:15PM -0400, Andrew Dunstan wrote:
On 2025-07-21 Mo 8:53 PM, Noah Misch wrote:
I suspect this is going to end with a structured dump like we use on the
pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
lex text files to do its job. pg_dumpall deals with a more-limited set of
statements than pg_dump deals with, but they're not _that much_ more limited.
I won't veto a lexing-based approach if it gets the behaviors right, but I
don't have high hopes for it getting the behaviors right and staying that way.
I have been talking offline with Mahendra about this. I agree that we would
be better off with a structured object for globals. But the thing that's
been striking me all afternoon as I have pondered it is that we should not
be designing such an animal at this stage of the cycle. Whatever we do we're
going to be stuck supporting, so I have very reluctantly come to the
conclusion that it would probably be better to back the feature out and have
another go for PG 19.
That makes sense to me. It would be quite a sprint to get this done in time,
and that wouldn't leave much room for additional testing and feedback before
the final release. I agree with the reluctance and with the conclusion.
On 2025-07-25 Fr 12:21 PM, Noah Misch wrote:
On Thu, Jul 24, 2025 at 04:33:15PM -0400, Andrew Dunstan wrote:
On 2025-07-21 Mo 8:53 PM, Noah Misch wrote:
I suspect this is going to end with a structured dump like we use on the
pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
lex text files to do its job. pg_dumpall deals with a more-limited set of
statements than pg_dump deals with, but they're not _that much_ more limited.
I won't veto a lexing-based approach if it gets the behaviors right, but I
don't have high hopes for it getting the behaviors right and staying that way.I have been talking offline with Mahendra about this. I agree that we would
be better off with a structured object for globals. But the thing that's
been striking me all afternoon as I have pondered it is that we should not
be designing such an animal at this stage of the cycle. Whatever we do we're
going to be stuck supporting, so I have very reluctantly come to the
conclusion that it would probably be better to back the feature out and have
another go for PG 19.
That makes sense to me. It would be quite a sprint to get this done in time,
and that wouldn't leave much room for additional testing and feedback before
the final release. I agree with the reluctance and with the conclusion.
Before we throw the baby out with the bathwater, how about this
suggestion? pg_dumpall would continue to produce globals.dat, but it
wouldn't be processed by pg_restore, which would only restore the
individual databases. Or else we just don't produce globals.dat at all.
Then we could introduce a structured object that pg_restore could safely
use for release 19, and I think we'd still have something useful for
release 18.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Andrew Dunstan <andrew@dunslane.net> writes:
Before we throw the baby out with the bathwater, how about this
suggestion? pg_dumpall would continue to produce globals.dat, but it
wouldn't be processed by pg_restore, which would only restore the
individual databases. Or else we just don't produce globals.dat at all.
Then we could introduce a structured object that pg_restore could safely
use for release 19, and I think we'd still have something useful for
release 18.
I dunno ... that seems like a pretty weird behavior. People would
have to do a separate text-mode "pg_dumpall -g" and remember to
restore that too. Admittedly, this could be more convenient than
"pg_dumpall -g" plus separately pg_dump'ing each database, which is
what people have to do today if they want anything smarter than a flat
text dumpfile. But it still seems like a hack --- and it would not be
compatible with v19, where presumably "pg_dumpall | pg_restore"
*would* restore globals. I think that the prospect of changing
dump/restore scripts and then having to change them again in v19
isn't too appetizing.
regards, tom lane
On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Before we throw the baby out with the bathwater, how about this
suggestion? pg_dumpall would continue to produce globals.dat, but it
wouldn't be processed by pg_restore, which would only restore the
individual databases. Or else we just don't produce globals.dat at all.
Then we could introduce a structured object that pg_restore could safely
use for release 19, and I think we'd still have something useful for
release 18.I dunno ... that seems like a pretty weird behavior. People would
have to do a separate text-mode "pg_dumpall -g" and remember to
restore that too. Admittedly, this could be more convenient than
"pg_dumpall -g" plus separately pg_dump'ing each database, which is
what people have to do today if they want anything smarter than a flat
text dumpfile. But it still seems like a hack --- and it would not be
compatible with v19, where presumably "pg_dumpall | pg_restore"
*would* restore globals. I think that the prospect of changing
dump/restore scripts and then having to change them again in v19
isn't too appetizing.
+1
On 2025-07-27 Su 7:56 PM, Noah Misch wrote:
On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Before we throw the baby out with the bathwater, how about this
suggestion? pg_dumpall would continue to produce globals.dat, but it
wouldn't be processed by pg_restore, which would only restore the
individual databases. Or else we just don't produce globals.dat at all.
Then we could introduce a structured object that pg_restore could safely
use for release 19, and I think we'd still have something useful for
release 18.I dunno ... that seems like a pretty weird behavior. People would
have to do a separate text-mode "pg_dumpall -g" and remember to
restore that too. Admittedly, this could be more convenient than
"pg_dumpall -g" plus separately pg_dump'ing each database, which is
what people have to do today if they want anything smarter than a flat
text dumpfile. But it still seems like a hack --- and it would not be
compatible with v19, where presumably "pg_dumpall | pg_restore"
*would* restore globals. I think that the prospect of changing
dump/restore scripts and then having to change them again in v19
isn't too appetizing.
+1
OK, got it. Will revert.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2025-07-28 Mo 8:04 AM, Andrew Dunstan wrote:
On 2025-07-27 Su 7:56 PM, Noah Misch wrote:
On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Before we throw the baby out with the bathwater, how about this
suggestion? pg_dumpall would continue to produce globals.dat, but it
wouldn't be processed by pg_restore, which would only restore the
individual databases. Or else we just don't produce globals.dat at
all.
Then we could introduce a structured object that pg_restore could
safely
use for release 19, and I think we'd still have something useful for
release 18.I dunno ... that seems like a pretty weird behavior. People would
have to do a separate text-mode "pg_dumpall -g" and remember to
restore that too. Admittedly, this could be more convenient than
"pg_dumpall -g" plus separately pg_dump'ing each database, which is
what people have to do today if they want anything smarter than a flat
text dumpfile. But it still seems like a hack --- and it would not be
compatible with v19, where presumably "pg_dumpall | pg_restore"
*would* restore globals. I think that the prospect of changing
dump/restore scripts and then having to change them again in v19
isn't too appetizing.

+1
OK, got it. Will revert.
Here's a reversion patch for master. It applies cleanly to release 18 as
well. Thanks to Mahendra Singh Thalor for helping me sanity check it.
(Any issues are of course my responsibility.)
I'll work on pulling the entry out of the release notes.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
dumpall-nontext-revert.patch (text/x-patch; charset=UTF-8)
commit 77d9ccbce21
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Tue Jul 29 15:54:15 2025 -0400
Revert Non text modes for pg_dumpall, and pg_restore support
Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try
to deal with those concerns at a very late stage of the release cycle,
the conclusion is to revert these features and work on them for the
next release.
This reverts parts or all of the following commits:
1495eff7bdb Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391d Clean up from commit 1495eff7bdb
289f74d0cb2 Add more TAP tests for pg_dumpall
2ef57908067 Fix a couple of error messages and tests for them
b52a4a5f285 Clean up error messages from 1495eff7bdb
4170298b6ec Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d28950 Fix memory leak in pg_restore.c
928394b664b Improve various new-to-v18 appendStringInfo calls
39729ec01d2 Fix fat fingering in 22cb6d28950
5822bf21d50 Add missing space in pg_restore documentation.
f09088a01d3 Free memory properly in pg_restore.c
40b9c27014d pg_restore cleanups
4aad2cb7707 Portability fix: isdigit() must be passed an unsigned char.
88e947136b4 Fix typos and grammar in the code
f60420cff66 doc: Alphabetize long options for pg_dump[all].
bc35adee8d7 doc: Put new options in consistent order on man pages
a876464abc7 Message style improvements
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
0ebd2425558 Run pgperltidy
Discussion: https://postgr.es/m/20250708212819.09.nmisch@google.com
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8ca68da5a55..f4cbc8288e3 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,10 +16,7 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
-
- <refpurpose>
- export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
- </refpurpose>
+ <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -36,7 +33,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into an SQL script file or an archive. The output contains
+ of a cluster into one script file. The script file contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -55,16 +52,11 @@ PostgreSQL documentation
</para>
<para>
- Plain text SQL scripts will be written to the standard output. Use the
+ The SQL script will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
- <para>
- Archives in other formats will be placed in a directory named using the
- <option>-f</option>/<option>--file</option>, which is required in this case.
- </para>
-
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -129,85 +121,10 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
- Note: This option can only be omitted when <option>--format</option> is plain
</para>
</listitem>
</varlistentry>
- <varlistentry>
- <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
- <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
- <listitem>
- <para>
- Specify the format of dump files. In plain format, all the dump data is
- sent in a single text stream. This is the default.
-
- In all other modes, <application>pg_dumpall</application> first creates two files:
- <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
- specified by <option>--file</option>.
- The first file contains global data, such as roles and tablespaces. The second
- contains a mapping between database oids and names. These files are used by
- <application>pg_restore</application>. Data for individual databases is placed in
- <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
-
- <variablelist>
- <varlistentry>
- <term><literal>d</literal></term>
- <term><literal>directory</literal></term>
- <listitem>
- <para>
- Output directory-format archives for each database,
- suitable for input into pg_restore. The directory
- will have database <type>oid</type> as its name.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><literal>p</literal></term>
- <term><literal>plain</literal></term>
- <listitem>
- <para>
- Output a plain-text SQL script file (the default).
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><literal>c</literal></term>
- <term><literal>custom</literal></term>
- <listitem>
- <para>
- Output a custom-format archive for each database,
- suitable for input into pg_restore. The archive
- will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
- <type>oid</type> of the database.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
- <term><literal>t</literal></term>
- <term><literal>tar</literal></term>
- <listitem>
- <para>
- Output a tar-format archive for each database,
- suitable for input into pg_restore. The archive
- will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
- <type>oid</type> of the database.
- </para>
- </listitem>
- </varlistentry>
-
- </variablelist>
-
- Note: see <xref linkend="app-pgdump"/> for details
- of how the various non plain text archives work.
-
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b649bd3a5ae..2abe05d47e9 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,9 +18,8 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore <productname>PostgreSQL</productname> databases from archives
- created by <application>pg_dump</application> or
- <application>pg_dumpall</application>
+ restore a <productname>PostgreSQL</productname> database from an
+ archive file created by <application>pg_dump</application>
</refpurpose>
</refnamediv>
@@ -39,14 +38,13 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database or cluster from an archive
- created by <xref linkend="app-pgdump"/> or
- <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database from an archive
+ created by <xref linkend="app-pgdump"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database or cluster to the state it was in at the time it was saved. The
- archives also allow <application>pg_restore</application> to
+ database to the state it was in at the time it was saved. The
+ archive files also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive formats are designed to be
+ prior to being restored. The archive files are designed to be
portable across architectures.
</para>
@@ -54,17 +52,10 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database.
- When restoring from a dump made by <application>pg_dumpall</application>,
- each database will be created and then the restoration will be run in that
- database.
-
- Otherwise, when a database name is not specified, a script containing the SQL
- commands necessary to rebuild the database or cluster is created and written
+ the database. Otherwise, a script containing the SQL
+ commands necessary to rebuild the database is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application> or
- <application>pg_dumpall</application>.
-
+ the plain text output format of <application>pg_dump</application>.
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -149,8 +140,6 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
- <option>--create</option> is required when restoring multiple databases
- from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -246,19 +235,6 @@ PostgreSQL documentation
</listitem>
</varlistentry>
- <varlistentry>
- <term><option>-g</option></term>
- <term><option>--globals-only</option></term>
- <listitem>
- <para>
- Restore only global objects (roles and tablespaces), no databases.
- </para>
- <para>
- This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -603,28 +579,6 @@ PostgreSQL documentation
</listitem>
</varlistentry>
- <varlistentry>
- <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
- <listitem>
- <para>
- Do not restore databases whose name matches
- <replaceable class="parameter">pattern</replaceable>.
- Multiple patterns can be excluded by writing multiple
- <option>--exclude-database</option> switches. The
- <replaceable class="parameter">pattern</replaceable> parameter is
- interpreted as a pattern according to the same rules used by
- <application>psql</application>'s <literal>\d</literal>
- commands (see <xref linkend="app-psql-patterns"/>),
- so multiple databases can also be excluded by writing wildcard
- characters in the pattern. When using wildcards, be careful to
- quote the pattern if needed to prevent shell wildcard expansion.
- </para>
- <para>
- This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 4a4ebbd8ec9..a2233b0a1b4 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -102,7 +102,6 @@ tests += {
't/003_pg_dump_with_server.pl',
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
- 't/006_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 5974d6706fd..086adcdc502 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,16 +333,6 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
-/*
- * When pg_restore restores multiple databases, then update already added entry
- * into array for cleanup.
- */
-void
-replace_on_exit_close_archive(Archive *AHX)
-{
- shutdown_info.AHX = AHX;
-}
-
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index af0007fb6d2..4ebef1e8644 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -308,7 +308,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX, bool append_data);
+extern void RestoreArchive(Archive *AHX);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 30e0da31aa3..dce88f040ac 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -87,7 +87,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec, bool append_data);
+ const pg_compress_specification compression_spec);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,14 +339,9 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/*
- * RestoreArchive
- *
- * If append_data is set, then append data into file as we are restoring dump
- * of multiple databases which was taken by pg_dumpall.
- */
+/* Public */
void
-RestoreArchive(Archive *AHX, bool append_data)
+RestoreArchive(Archive *AHX)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -463,7 +458,7 @@ RestoreArchive(Archive *AHX, bool append_data)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
+ SetOutput(AH, ropt->filename, ropt->compression_spec);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -1302,7 +1297,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec, false);
+ SetOutput(AH, ropt->filename, out_compression_spec);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1681,8 +1676,7 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec,
- bool append_data)
+ const pg_compress_specification compression_spec)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1702,7 +1696,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (append_data || AH->mode == archModeAppend)
+ if (AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 365073b3eae..325b53fc9bd 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,7 +394,6 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
-extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index d94d0de2a5d..b5ba3b46dd9 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH, false);
+ RestoreArchive((Archive *) AH);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 6298edb26b5..f543d418e46 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1265,7 +1265,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout, false);
+ RestoreArchive(fout);
CloseArchive(fout);
@@ -1276,7 +1276,7 @@ main(int argc, char **argv)
static void
help(const char *progname)
{
- printf(_("%s exports a PostgreSQL database as an SQL script or to other formats.\n\n"), progname);
+ printf(_("%s dumps a database as a text file or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [DBNAME]\n"), progname);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 100317b1aa9..f69f0260256 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -65,10 +65,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
+static void dumpDatabases(PGconn *conn);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts,
- char *dbfile, ArchiveFormat archDumpFormat);
+static int runPgDump(const char *dbname, const char *create_opts);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -77,7 +76,6 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
-static ArchiveFormat parseDumpFormat(const char *format);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -150,7 +148,6 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
- {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -201,8 +198,6 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
- ArchiveFormat archDumpFormat = archNull;
- const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -252,7 +247,7 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -280,9 +275,7 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
- case 'F':
- formatName = pg_strdup(optarg);
- break;
+
case 'g':
globals_only = true;
break;
@@ -431,21 +424,6 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- /* Get format for dump. */
- archDumpFormat = parseDumpFormat(formatName);
-
- /*
- * If a non-plain format is specified, a file name is also required as the
- * path to the main directory.
- */
- if (archDumpFormat != archNull &&
- (!filename || strcmp(filename, "") == 0))
- {
- pg_log_error("option -F/--format=d|c|t requires option -f/--file");
- pg_log_error_hint("Try \"%s --help\" for more information.", progname);
- exit_nicely(1);
- }
-
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -510,33 +488,6 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
- /*
- * Open the output file if required, otherwise use stdout. If required,
- * then create new directory and global.dat file.
- */
- if (archDumpFormat != archNull)
- {
- char global_path[MAXPGPATH];
-
- /* Create new directory or accept the empty existing directory. */
- create_or_open_dir(filename);
-
- snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
-
- OPF = fopen(global_path, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open file \"%s\": %m", global_path);
- }
- else if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* If there was a database specified on the command line, use that,
* otherwise try to connect to database "postgres", and failing that
@@ -576,6 +527,19 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
+ /*
+ * Open the output file if required, otherwise use stdout
+ */
+ if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* Set the client encoding if requested.
*/
@@ -675,7 +639,7 @@ main(int argc, char *argv[])
}
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn, archDumpFormat);
+ dumpDatabases(conn);
PQfinish(conn);
@@ -688,7 +652,7 @@ main(int argc, char *argv[])
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync && (archDumpFormat == archNull))
+ if (dosync)
(void) fsync_fname(filename, false);
}
@@ -699,14 +663,12 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
- printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
- " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -1013,6 +975,9 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
+ if (PQntuples(res) > 0)
+ fprintf(OPF, "\n--\n-- User Configurations\n--\n");
+
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
@@ -1526,7 +1491,6 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
- static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1538,13 +1502,7 @@ dumpUserConfig(PGconn *conn, const char *username)
res = executeQuery(conn, buf->data);
if (PQntuples(res) > 0)
- {
- if (!header_done)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
- header_done = true;
-
fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", username);
- }
for (int i = 0; i < PQntuples(res); i++)
{
@@ -1618,13 +1576,10 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
+dumpDatabases(PGconn *conn)
{
PGresult *res;
int i;
- char db_subdir[MAXPGPATH];
- char dbfilepath[MAXPGPATH];
- FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1638,42 +1593,18 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname, oid "
+ "SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (archDumpFormat == archNull && PQntuples(res) > 0)
+ if (PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
- /*
- * If directory/tar/custom format is specified, create a subdirectory
- * under the main directory and each database dump file or subdirectory
- * will be created in that subdirectory by pg_dump.
- */
- if (archDumpFormat != archNull)
- {
- char map_file_path[MAXPGPATH];
-
- snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
-
- /* Create a subdirectory with 'databases' name under main directory. */
- if (mkdir(db_subdir, pg_dir_create_mode) != 0)
- pg_fatal("could not create directory \"%s\": %m", db_subdir);
-
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
-
- /* Create a map file (to store dboid and dbname) */
- map_file = fopen(map_file_path, PG_BINARY_W);
- if (!map_file)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
- }
-
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
- char *oid = PQgetvalue(res, i, 1);
- const char *create_opts = "";
+ const char *create_opts;
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1687,27 +1618,9 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
continue;
}
- /*
- * If this is not a plain format dump, then append dboid and dbname to
- * the map.dat file.
- */
- if (archDumpFormat != archNull)
- {
- if (archDumpFormat == archCustom)
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
- else if (archDumpFormat == archTar)
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
- else
- snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
-
- /* Put one line entry for dboid and dbname in map file. */
- fprintf(map_file, "%s %s\n", oid, dbname);
- }
-
pg_log_info("dumping database \"%s\"", dbname);
- if (archDumpFormat == archNull)
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", dbname);
/*
* We assume that "template1" and "postgres" already exist in the
@@ -1721,9 +1634,12 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
if (output_clean)
create_opts = "--clean --create";
- /* Since pg_dump won't emit a \connect command, we must */
- else if (archDumpFormat == archNull)
+ else
+ {
+ create_opts = "";
+ /* Since pg_dump won't emit a \connect command, we must */
fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
}
else
create_opts = "--create";
@@ -1731,30 +1647,19 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
if (filename)
fclose(OPF);
- ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
+ ret = runPgDump(dbname, create_opts);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
if (filename)
{
- char global_path[MAXPGPATH];
-
- if (archDumpFormat != archNull)
- snprintf(global_path, MAXPGPATH, "%s/global.dat", filename);
- else
- snprintf(global_path, MAXPGPATH, "%s", filename);
-
- OPF = fopen(global_path, PG_BINARY_A);
+ OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
pg_fatal("could not re-open the output file \"%s\": %m",
- global_path);
+ filename);
}
}
- /* Close map file */
- if (archDumpFormat != archNull)
- fclose(map_file);
-
PQclear(res);
}
@@ -1764,8 +1669,7 @@ dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts, char *dbfile,
- ArchiveFormat archDumpFormat)
+runPgDump(const char *dbname, const char *create_opts)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1774,36 +1678,17 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile,
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
/*
- * If this is not a plain format dump, then append file name and dump
- * format to the pg_dump command to get archive dump.
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
*/
- if (archDumpFormat != archNull)
- {
- printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
- dbfile, create_opts);
-
- if (archDumpFormat == archDirectory)
- appendPQExpBufferStr(&cmd, " --format=directory ");
- else if (archDumpFormat == archCustom)
- appendPQExpBufferStr(&cmd, " --format=custom ");
- else if (archDumpFormat == archTar)
- appendPQExpBufferStr(&cmd, " --format=tar ");
- }
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
else
- {
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
- /*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
- */
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
- else
- appendPQExpBufferStr(&cmd, " -Fp ");
- }
+ appendPQExpBufferStr(&cmd, " -Fp ");
/*
* Append the database name to the already-constructed stem of connection
@@ -1948,36 +1833,3 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
-
-/*
- * parseDumpFormat
- *
- * This will validate dump formats.
- */
-static ArchiveFormat
-parseDumpFormat(const char *format)
-{
- ArchiveFormat archDumpFormat;
-
- if (pg_strcasecmp(format, "c") == 0)
- archDumpFormat = archCustom;
- else if (pg_strcasecmp(format, "custom") == 0)
- archDumpFormat = archCustom;
- else if (pg_strcasecmp(format, "d") == 0)
- archDumpFormat = archDirectory;
- else if (pg_strcasecmp(format, "directory") == 0)
- archDumpFormat = archDirectory;
- else if (pg_strcasecmp(format, "p") == 0)
- archDumpFormat = archNull;
- else if (pg_strcasecmp(format, "plain") == 0)
- archDumpFormat = archNull;
- else if (pg_strcasecmp(format, "t") == 0)
- archDumpFormat = archTar;
- else if (pg_strcasecmp(format, "tar") == 0)
- archDumpFormat = archTar;
- else
- pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
- format);
-
- return archDumpFormat;
-}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 6ef789cb06d..b4e1acdb63f 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump/pg_dumpall using the archiver
+ * from a backup archive created by pg_dump using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,15 +41,11 @@
#include "postgres_fe.h"
#include <ctype.h>
-#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
-#include "common/string.h"
-#include "connectdb.h"
#include "fe_utils/option_utils.h"
-#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -57,43 +53,18 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
-static bool file_exists_in_directory(const char *dir, const char *filename);
-static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, bool append_data, int num);
-static int read_one_statement(StringInfo inBuf, FILE *pfile);
-static int restore_all_databases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
-static int process_global_sql_commands(PGconn *conn, const char *dumpdirpath,
- const char *outfile);
-static void copy_or_print_global_file(const char *outfile, FILE *pfile);
-static int get_dbnames_list_to_restore(PGconn *conn,
- SimplePtrList *dbname_oid_list,
- SimpleStringList db_exclude_patterns);
-static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
- SimplePtrList *dbname_oid_list);
-
-/*
- * Stores a database OID and the corresponding name.
- */
-typedef struct DbOidName
-{
- Oid oid;
- char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
-} DbOidName;
-
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
+ int exit_code;
int numWorkers = 1;
+ Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
- int n_errors = 0;
- bool globals_only = false;
- SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
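The DbOidName struct removed above relies on C's flexible-array-member idiom: a single allocation holds both the fixed header and the variable-length, NUL-terminated name. A minimal standalone sketch of the same allocation pattern (plain malloc standing in for pg_malloc; make_db_entry is a hypothetical helper, not from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int Oid;

typedef struct DbOidName
{
    Oid         oid;
    char        str[];          /* flexible array member, NUL-terminated */
} DbOidName;

/* Allocate one block sized for the header plus the name and its NUL. */
static DbOidName *
make_db_entry(Oid oid, const char *name)
{
    size_t      namelen = strlen(name);
    DbOidName  *entry = malloc(offsetof(DbOidName, str) + namelen + 1);

    if (entry == NULL)
        return NULL;
    entry->oid = oid;
    memcpy(entry->str, name, namelen + 1);
    return entry;
}
```

Keeping the OID and name in one allocation means a single free() per entry, which is what lets the list teardown dispose of each element in one call.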
@@ -119,7 +90,6 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
- {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -174,7 +144,6 @@ main(int argc, char **argv)
{"with-statistics", no_argument, &with_statistics, 1},
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
- {"exclude-database", required_argument, NULL, 6},
{NULL, 0, NULL, 0}
};
@@ -203,7 +172,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -230,14 +199,11 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
- case 'g':
- /* restore only global.dat file from directory */
- globals_only = true;
- break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
+
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -352,9 +318,6 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
- case 6: /* database patterns to skip */
- simple_string_list_append(&db_exclude_patterns, optarg);
- break;
default:
/* getopt_long already emitted a complaint */
@@ -382,13 +345,6 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
- if (db_exclude_patterns.head != NULL && globals_only)
- {
- pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
- pg_log_error_hint("Try \"%s --help\" for more information.", progname);
- exit_nicely(1);
- }
-
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -496,114 +452,6 @@ main(int argc, char **argv)
opts->formatName);
}
- /*
- * If toc.dat file is not present in the current path, then check for
- * global.dat. If global.dat file is present, then restore all the
- * databases from map.dat (if it exists), but skip restoring those
- * matching --exclude-database patterns.
- */
- if (inputFileSpec != NULL && !file_exists_in_directory(inputFileSpec, "toc.dat") &&
- file_exists_in_directory(inputFileSpec, "global.dat"))
- {
- PGconn *conn = NULL; /* Connection to restore global sql
- * commands. */
-
- /*
- * Can only use --list or --use-list options with a single database
- * dump.
- */
- if (opts->tocSummary)
- pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
- else if (opts->tocFile)
- pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
-
- /*
- * To restore from a pg_dumpall archive, -C (create database) option
- * must be specified unless we are only restoring globals.
- */
- if (!globals_only && opts->createDB != 1)
- {
- pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
- pg_log_error_hint("Try \"%s --help\" for more information.", progname);
- pg_log_error_hint("Individual databases can be restored using their specific archives.");
- exit_nicely(1);
- }
-
- /*
- * Connect to the database to execute global sql commands from
- * global.dat file.
- */
- if (opts->cparams.dbname)
- {
- conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
-
-
- if (!conn)
- pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
- }
-
- /* If globals-only, then return from here. */
- if (globals_only)
- {
- /*
- * Open global.dat file and execute/append all the global sql
- * commands.
- */
- n_errors = process_global_sql_commands(conn, inputFileSpec,
- opts->filename);
-
- if (conn)
- PQfinish(conn);
-
- pg_log_info("database restoring skipped because option -g/--globals-only was specified");
- }
- else
- {
- /* Now restore all the databases from map.dat */
- n_errors = restore_all_databases(conn, inputFileSpec, db_exclude_patterns,
- opts, numWorkers);
- }
-
- /* Free db pattern list. */
- simple_string_list_destroy(&db_exclude_patterns);
- }
- else /* process if global.dat file does not exist. */
- {
- if (db_exclude_patterns.head != NULL)
- pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
-
- if (globals_only)
- pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
-
- n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0);
- }
-
- /* Done, print a summary of ignored errors during restore. */
- if (n_errors)
- {
- pg_log_warning("errors ignored on restore: %d", n_errors);
- return 1;
- }
-
- return 0;
-}
-
-/*
- * restore_one_database
- *
- * Restore one database using its toc.dat file.
- *
- * Returns the number of errors encountered while restoring.
- */
-static int
-restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, bool append_data, int num)
-{
- Archive *AH;
- int n_errors;
-
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -611,15 +459,9 @@ restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op. If we are
- * restoring multiple databases, then only update AX handle for cleanup as
- * the previous entry was already in the array and we had closed previous
- * connection, so we can use the same array slot.
+ * it's still NULL, the cleanup function will just be a no-op.
*/
- if (!append_data || num == 0)
- on_exit_close_archive(AH);
- else
- replace_on_exit_close_archive(AH);
+ on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -639,21 +481,25 @@ restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH, append_data);
+ RestoreArchive(AH);
}
- n_errors = AH->n_errors;
+ /* done, print a summary of ignored errors */
+ if (AH->n_errors)
+ pg_log_warning("errors ignored on restore: %d", AH->n_errors);
/* AH may be freed in CloseArchive? */
+ exit_code = AH->n_errors ? 1 : 0;
+
CloseArchive(AH);
- return n_errors;
+ return exit_code;
}
static void
usage(const char *progname)
{
- printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
+ printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -671,7 +517,6 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
- printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -688,7 +533,6 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
- printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -725,8 +569,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
- "combined and specified multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
+ "multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -831,585 +675,3 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
-
-/*
- * file_exists_in_directory
- *
- * Returns true if the file exists in the given directory.
- */
-static bool
-file_exists_in_directory(const char *dir, const char *filename)
-{
- struct stat st;
- char buf[MAXPGPATH];
-
- if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
- pg_fatal("directory name too long: \"%s\"", dir);
-
- return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
-}
-
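The file_exists_in_directory() helper removed above is essentially a stat() wrapper that also rejects non-regular files. A self-contained sketch under the same assumptions (a fixed-size buffer standing in for MAXPGPATH; returning false on overlong paths where the original raised a fatal error):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define PATHBUF 1024            /* stand-in for MAXPGPATH */

/* Return true only if dir/filename exists and is a regular file. */
static bool
file_exists_in_dir(const char *dir, const char *filename)
{
    char        buf[PATHBUF];
    struct stat st;

    if (snprintf(buf, sizeof(buf), "%s/%s", dir, filename) >= (int) sizeof(buf))
        return false;           /* path too long */

    return stat(buf, &st) == 0 && S_ISREG(st.st_mode);
}
```

The S_ISREG() test matters: a subdirectory named toc.dat would otherwise be mistaken for an archive marker.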
-/*
- * read_one_statement
- *
- * This reads from the passed file pointer using fgetc() until a semicolon,
- * the SQL statement terminator used in global.dat, is seen.
- *
- * EOF is returned if end-of-file input is seen; time to shut down.
- */
-
-static int
-read_one_statement(StringInfo inBuf, FILE *pfile)
-{
- int c; /* character read from getc() */
- int m;
-
- StringInfoData q;
-
- initStringInfo(&q);
-
- resetStringInfo(inBuf);
-
- /*
- * Read characters until EOF or the appropriate delimiter is seen.
- */
- while ((c = fgetc(pfile)) != EOF)
- {
- if (c != '\'' && c != '"' && c != '\n' && c != ';')
- {
- appendStringInfoChar(inBuf, (char) c);
- while ((c = fgetc(pfile)) != EOF)
- {
- if (c != '\'' && c != '"' && c != ';' && c != '\n')
- appendStringInfoChar(inBuf, (char) c);
- else
- break;
- }
- }
-
- if (c == '\'' || c == '"')
- {
- appendStringInfoChar(&q, (char) c);
- m = c;
-
- while ((c = fgetc(pfile)) != EOF)
- {
- appendStringInfoChar(&q, (char) c);
-
- if (c == m)
- {
- appendStringInfoString(inBuf, q.data);
- resetStringInfo(&q);
- break;
- }
- }
- }
-
- if (c == ';')
- {
- appendStringInfoChar(inBuf, (char) ';');
- break;
- }
-
- if (c == '\n')
- appendStringInfoChar(inBuf, (char) '\n');
- }
-
- pg_free(q.data);
-
- /* No input before EOF signal means time to quit. */
- if (c == EOF && inBuf->len == 0)
- return EOF;
-
- /* return something that's not EOF */
- return 'Q';
-}
-
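read_one_statement() above splits global.dat into statements on semicolons while treating single- or double-quoted spans as opaque, so a ';' inside quotes does not end a statement. The same scanning logic over an in-memory string, as a rough sketch (first_statement_len is a hypothetical name; like the original, it does not handle doubled quotes or dollar quoting):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Return the number of characters in the first SQL statement of "src",
 * including its terminating semicolon; quotes hide any ';' inside them.
 * Returns 0 if no terminated statement is found.
 */
static size_t
first_statement_len(const char *src)
{
    char        quote = 0;      /* 0 = not inside a quoted span */

    for (size_t i = 0; src[i] != '\0'; i++)
    {
        char        c = src[i];

        if (quote)
        {
            if (c == quote)
                quote = 0;      /* closing quote */
        }
        else if (c == '\'' || c == '"')
            quote = c;          /* opening quote */
        else if (c == ';')
            return i + 1;       /* statement ends here */
    }
    return 0;
}
```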
-/*
- * get_dbnames_list_to_restore
- *
- * This marks for skipping any entries in dbname_oid_list that match an
- * entry in the db_exclude_patterns list.
- *
- * Returns the number of databases to be restored.
- *
- */
-static int
-get_dbnames_list_to_restore(PGconn *conn,
- SimplePtrList *dbname_oid_list,
- SimpleStringList db_exclude_patterns)
-{
- int count_db = 0;
- PQExpBuffer query;
- PGresult *res;
-
- query = createPQExpBuffer();
-
- if (!conn)
- pg_log_info("considering PATTERN as NAME for --exclude-database option as no database connection while doing pg_restore");
-
- /*
- * Process all dbnames one by one, and mark any that are specified to be
- * skipped during restore.
- */
- for (SimplePtrListCell *db_cell = dbname_oid_list->head;
- db_cell; db_cell = db_cell->next)
- {
- DbOidName *dbidname = (DbOidName *) db_cell->ptr;
- bool skip_db_restore = false;
- PQExpBuffer db_lit = createPQExpBuffer();
-
- appendStringLiteralConn(db_lit, dbidname->str, conn);
-
- for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
- {
- /*
- * If there is an exact match then we don't need to try a pattern
- * match
- */
- if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
- skip_db_restore = true;
- /* Otherwise, try a pattern match if there is a connection */
- else if (conn)
- {
- int dotcnt;
-
- appendPQExpBufferStr(query, "SELECT 1 ");
- processSQLNamePattern(conn, query, pat_cell->val, false,
- false, NULL, db_lit->data,
- NULL, NULL, NULL, &dotcnt);
-
- if (dotcnt > 0)
- {
- pg_log_error("improper qualified name (too many dotted names): %s",
- dbidname->str);
- PQfinish(conn);
- exit_nicely(1);
- }
-
- res = executeQuery(conn, query->data);
-
- if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
- {
- skip_db_restore = true;
- pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
- }
-
- PQclear(res);
- resetPQExpBuffer(query);
- }
-
- if (skip_db_restore)
- break;
- }
-
- destroyPQExpBuffer(db_lit);
-
- /*
- * Mark db to be skipped or increment the counter of dbs to be
- * restored
- */
- if (skip_db_restore)
- {
- pg_log_info("excluding database \"%s\"", dbidname->str);
- dbidname->oid = InvalidOid;
- }
- else
- {
- count_db++;
- }
- }
-
- destroyPQExpBuffer(query);
-
- return count_db;
-}
-
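get_dbnames_list_to_restore() above tries an exact, case-insensitive name match before falling back to a server-side pattern match via processSQLNamePattern(). The exact-match half can be sketched standalone (strcasecmp standing in for pg_strcasecmp; db_is_excluded is a hypothetical helper):

```c
#include <assert.h>
#include <stdbool.h>
#include <strings.h>            /* strcasecmp (POSIX) */

/* Return true if "dbname" matches any pattern exactly, ignoring case. */
static bool
db_is_excluded(const char *dbname, const char *const *patterns, int npatterns)
{
    for (int i = 0; i < npatterns; i++)
    {
        if (strcasecmp(dbname, patterns[i]) == 0)
            return true;
    }
    return false;
}
```

The exact-match path is what lets --exclude-database keep working even when no connection is available for server-side pattern evaluation.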
-/*
- * get_dbname_oid_list_from_mfile
- *
- * Open the map.dat file, read it line by line, and build a list of
- * database names with their corresponding OIDs.
- *
- * Returns the total number of database names found in map.dat.
- */
-static int
-get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
-{
- StringInfoData linebuf;
- FILE *pfile;
- char map_file_path[MAXPGPATH];
- int count = 0;
-
-
- /*
- * If only the global.dat file is present in the dump, return here, as
- * there are no databases to restore.
- */
- if (!file_exists_in_directory(dumpdirpath, "map.dat"))
- {
- pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
- return 0;
- }
-
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
-
- /* Open map.dat file. */
- pfile = fopen(map_file_path, PG_BINARY_R);
-
- if (pfile == NULL)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
-
- initStringInfo(&linebuf);
-
- /* Append all the dbname/db_oid combinations to the list. */
- while (pg_get_line_buf(pfile, &linebuf))
- {
- Oid db_oid = InvalidOid;
- char *dbname;
- DbOidName *dbidname;
- int namelen;
- char *p = linebuf.data;
-
- /* Extract dboid. */
- while (isdigit((unsigned char) *p))
- p++;
- if (p > linebuf.data && *p == ' ')
- {
- sscanf(linebuf.data, "%u", &db_oid);
- p++;
- }
-
- /* dbname is the rest of the line */
- dbname = p;
- namelen = strlen(dbname);
-
- /* Report error and exit if the file has any corrupted data. */
- if (!OidIsValid(db_oid) || namelen <= 1)
- pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
- count + 1);
-
- pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
- dbname, db_oid, map_file_path);
-
- dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
- dbidname->oid = db_oid;
- strlcpy(dbidname->str, dbname, namelen);
-
- simple_ptr_list_append(dbname_oid_list, dbidname);
- count++;
- }
-
- /* Close map.dat file. */
- fclose(pfile);
-
- return count;
-}
-
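Each map.dat line read above pairs an OID with a database name separated by a single space, and the name is simply the rest of the line (it may itself contain spaces). The same parse on one in-memory line, sketched with strtoul in place of the hand-rolled digit scan (parse_map_line is a hypothetical helper):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Parse "1234 dbname" into *oid and *name (a pointer into "line").
 * Returns false on malformed input: no leading digits, no space
 * separator, or an empty name.
 */
static bool
parse_map_line(char *line, unsigned long *oid, char **name)
{
    char       *end;

    *oid = strtoul(line, &end, 10);
    if (end == line || *end != ' ')
        return false;           /* no OID or missing separator */

    *name = end + 1;

    /* strip a trailing newline, if the line reader kept it */
    size_t      len = strlen(*name);

    if (len > 0 && (*name)[len - 1] == '\n')
        (*name)[len - 1] = '\0';

    return **name != '\0';
}
```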
-/*
- * restore_all_databases
- *
- * Restore all the databases whose dumps are present in the
- * directory, based on the map.dat file mapping.
- *
- * Databases that match a pattern given with the --exclude-database
- * option are skipped.
- *
- * Returns the number of errors encountered while restoring.
- */
-static int
-restore_all_databases(PGconn *conn, const char *dumpdirpath,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts,
- int numWorkers)
-{
- SimplePtrList dbname_oid_list = {NULL, NULL};
- int num_db_restore = 0;
- int num_total_db;
- int n_errors_total;
- int count = 0;
- char *connected_db = NULL;
- bool dumpData = opts->dumpData;
- bool dumpSchema = opts->dumpSchema;
- bool dumpStatistics = opts->dumpSchema;
-
-	/* Save the db name so it can be reused for all the databases. */
- if (opts->cparams.dbname)
- connected_db = opts->cparams.dbname;
-
- num_total_db = get_dbname_oid_list_from_mfile(dumpdirpath, &dbname_oid_list);
-
- /* If map.dat has no entries, return after processing global.dat */
- if (dbname_oid_list.head == NULL)
- return process_global_sql_commands(conn, dumpdirpath, opts->filename);
-
- pg_log_info(ngettext("found %d database name in \"%s\"",
- "found %d database names in \"%s\"",
- num_total_db),
- num_total_db, "map.dat");
-
- if (!conn)
- {
- pg_log_info("trying to connect to database \"%s\"", "postgres");
-
- conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
-
- /* Try with template1. */
- if (!conn)
- {
- pg_log_info("trying to connect to database \"%s\"", "template1");
-
- conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
- }
- }
-
- /*
- * filter the db list according to the exclude patterns
- */
- num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
- db_exclude_patterns);
-
- /* Open global.dat file and execute/append all the global sql commands. */
- n_errors_total = process_global_sql_commands(conn, dumpdirpath, opts->filename);
-
- /* Close the db connection as we are done with globals and patterns. */
- if (conn)
- PQfinish(conn);
-
- /* Exit if no db needs to be restored. */
- if (dbname_oid_list.head == NULL || num_db_restore == 0)
- {
- pg_log_info(ngettext("no database needs restoring out of %d database",
- "no database needs restoring out of %d databases", num_total_db),
- num_total_db);
- return n_errors_total;
- }
-
- pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
-
- /*
- * We have a list of databases to restore after processing the
- * exclude-database switch(es). Now we can restore them one by one.
- */
- for (SimplePtrListCell *db_cell = dbname_oid_list.head;
- db_cell; db_cell = db_cell->next)
- {
- DbOidName *dbidname = (DbOidName *) db_cell->ptr;
- char subdirpath[MAXPGPATH];
- char subdirdbpath[MAXPGPATH];
- char dbfilename[MAXPGPATH];
- int n_errors;
-
- /* ignore dbs marked for skipping */
- if (dbidname->oid == InvalidOid)
- continue;
-
- /*
- * We need to reset override_dbname so that objects can be restored
- * into an already created database. (used with -d/--dbname option)
- */
- if (opts->cparams.override_dbname)
- {
- pfree(opts->cparams.override_dbname);
- opts->cparams.override_dbname = NULL;
- }
-
- snprintf(subdirdbpath, MAXPGPATH, "%s/databases", dumpdirpath);
-
- /*
- * Look for the database dump file/dir. If there is an {oid}.tar or
- * {oid}.dmp file, use it. Otherwise try to use a directory called
- * {oid}
- */
- snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
- if (file_exists_in_directory(subdirdbpath, dbfilename))
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", dumpdirpath, dbidname->oid);
- else
- {
- snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
-
- if (file_exists_in_directory(subdirdbpath, dbfilename))
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", dumpdirpath, dbidname->oid);
- else
- snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", dumpdirpath, dbidname->oid);
- }
-
- pg_log_info("restoring database \"%s\"", dbidname->str);
-
- /* If database is already created, then don't set createDB flag. */
- if (opts->cparams.dbname)
- {
- PGconn *test_conn;
-
- test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
- if (test_conn)
- {
- PQfinish(test_conn);
-
- /* Use already created database for connection. */
- opts->createDB = 0;
- opts->cparams.dbname = dbidname->str;
- }
- else
- {
- /* we'll have to create it */
- opts->createDB = 1;
- opts->cparams.dbname = connected_db;
- }
- }
-
- /*
- * Reset flags - might have been reset in pg_backup_archiver.c by the
- * previous restore.
- */
- opts->dumpData = dumpData;
- opts->dumpSchema = dumpSchema;
- opts->dumpStatistics = dumpStatistics;
-
- /* Restore the single database. */
- n_errors = restore_one_database(subdirpath, opts, numWorkers, true, count);
-
- /* Print a summary of ignored errors during single database restore. */
- if (n_errors)
- {
- n_errors_total += n_errors;
- pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
- }
-
- count++;
- }
-
- /* Log number of processed databases. */
- pg_log_info("number of restored databases is %d", num_db_restore);
-
- /* Free dbname and dboid list. */
- simple_ptr_list_destroy(&dbname_oid_list);
-
- return n_errors_total;
-}
-
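When resolving each database's archive, the loop above probes for {oid}.tar first, then {oid}.dmp, and finally falls back to a directory named {oid}. That resolution order can be sketched standalone (resolve_db_archive is a hypothetical helper; as in the original, the directory fallback is assumed to exist when neither file does):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define PATHBUF 1024            /* stand-in for MAXPGPATH */

static bool
path_is_regular_file(const char *path)
{
    struct stat st;

    return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}

/*
 * Fill "out" with the archive path to use for database "oid" under
 * dumpdir/databases: prefer {oid}.tar, then {oid}.dmp, else fall back
 * to the directory {oid}.
 */
static void
resolve_db_archive(const char *dumpdir, unsigned int oid,
                   char *out, size_t outlen)
{
    snprintf(out, outlen, "%s/databases/%u.tar", dumpdir, oid);
    if (path_is_regular_file(out))
        return;

    snprintf(out, outlen, "%s/databases/%u.dmp", dumpdir, oid);
    if (path_is_regular_file(out))
        return;

    snprintf(out, outlen, "%s/databases/%u", dumpdir, oid);
}
```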
-/*
- * process_global_sql_commands
- *
- * Open global.dat and execute or copy the sql commands one by one.
- *
- * If outfile is not NULL, copy all sql commands into outfile rather than
- * executing them.
- *
- * Returns the number of errors while processing global.dat
- */
-static int
-process_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *outfile)
-{
- char global_file_path[MAXPGPATH];
- PGresult *result;
- StringInfoData sqlstatement,
- user_create;
- FILE *pfile;
- int n_errors = 0;
-
- snprintf(global_file_path, MAXPGPATH, "%s/global.dat", dumpdirpath);
-
- /* Open global.dat file. */
- pfile = fopen(global_file_path, PG_BINARY_R);
-
- if (pfile == NULL)
- pg_fatal("could not open file \"%s\": %m", global_file_path);
-
- /*
- * If outfile is given, then just copy all global.dat file data into
- * outfile.
- */
- if (outfile)
- {
- copy_or_print_global_file(outfile, pfile);
- return 0;
- }
-
- /* Init sqlstatement to append commands. */
- initStringInfo(&sqlstatement);
-
- /* creation statement for our current role */
- initStringInfo(&user_create);
- appendStringInfoString(&user_create, "CREATE ROLE ");
- /* should use fmtId here, but we don't know the encoding */
- appendStringInfoString(&user_create, PQuser(conn));
- appendStringInfoChar(&user_create, ';');
-
- /* Process file till EOF and execute sql statements. */
- while (read_one_statement(&sqlstatement, pfile) != EOF)
- {
- /* don't try to create the role we are connected as */
- if (strstr(sqlstatement.data, user_create.data))
- continue;
-
- pg_log_info("executing query: %s", sqlstatement.data);
- result = PQexec(conn, sqlstatement.data);
-
- switch (PQresultStatus(result))
- {
- case PGRES_COMMAND_OK:
- case PGRES_TUPLES_OK:
- case PGRES_EMPTY_QUERY:
- break;
- default:
- n_errors++;
- pg_log_error("could not execute query: %s", PQerrorMessage(conn));
- pg_log_error_detail("Command was: %s", sqlstatement.data);
- }
- PQclear(result);
- }
-
- /* Print a summary of ignored errors during global.dat. */
- if (n_errors)
- pg_log_warning(ngettext("ignored %d error in file \"%s\"",
- "ignored %d errors in file \"%s\"", n_errors),
- n_errors, global_file_path);
- fclose(pfile);
-
- return n_errors;
-}
-
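One subtlety in process_global_sql_commands() above: the role pg_restore is connected as already exists on the target, so its CREATE ROLE statement must be skipped to avoid a guaranteed error. The removed code uses a plain strstr() match, which can be sketched as follows (is_own_role_creation is a hypothetical name; as the original's comment notes, proper identifier quoting via fmtId() is not handled):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Return true if "stmt" contains the CREATE ROLE statement for
 * "current_user", mirroring the removed code's substring check.
 */
static bool
is_own_role_creation(const char *stmt, const char *current_user)
{
    char        needle[256];

    snprintf(needle, sizeof(needle), "CREATE ROLE %s;", current_user);
    return strstr(stmt, needle) != NULL;
}
```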
-/*
- * copy_or_print_global_file
- *
- * Copy global.dat into the output file. If "-" is used as outfile,
- * then print commands to stdout.
- */
-static void
-copy_or_print_global_file(const char *outfile, FILE *pfile)
-{
- char out_file_path[MAXPGPATH];
- FILE *OPF;
- int c;
-
- /* "-" is used for stdout. */
- if (strcmp(outfile, "-") == 0)
- OPF = stdout;
- else
- {
- snprintf(out_file_path, MAXPGPATH, "%s", outfile);
- OPF = fopen(out_file_path, PG_BINARY_W);
-
- if (OPF == NULL)
- {
- fclose(pfile);
- pg_fatal("could not open file: \"%s\"", outfile);
- }
- }
-
- /* Append global.dat into output file or print to stdout. */
- while ((c = fgetc(pfile)) != EOF)
- fputc(c, OPF);
-
- fclose(pfile);
-
- /* Close output file. */
- if (strcmp(outfile, "-") != 0)
- fclose(OPF);
-}
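copy_or_print_global_file() above follows the common CLI convention that an output file named "-" means stdout. A minimal sketch of that dispatch plus the byte-copy loop (copy_to_outfile is a hypothetical helper returning -1 where the original raised a fatal error):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Copy all bytes from "in" to "outfile"; the name "-" selects stdout.
 * Returns 0 on success, -1 if the output file cannot be opened.
 */
static int
copy_to_outfile(FILE *in, const char *outfile)
{
    FILE       *out;
    int         c;
    int         to_stdout = (strcmp(outfile, "-") == 0);

    out = to_stdout ? stdout : fopen(outfile, "wb");
    if (out == NULL)
        return -1;

    while ((c = fgetc(in)) != EOF)
        fputc(c, out);

    if (!to_stdout)
        fclose(out);
    return 0;
}
```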
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index c3c5fae11ea..37d893d5e6a 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,24 +237,6 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
-command_fails_like(
- [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
- qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
- 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
-);
-
-command_fails_like(
- [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
- qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
- 'When option --exclude-database is used in pg_restore with dump of pg_dump'
-);
-
-command_fails_like(
- [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
- qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
- 'When option --globals-only is not used in pg_restore with dump of pg_dump'
-);
-
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -262,8 +244,4 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
-command_fails_like(
- [ 'pg_dumpall', '--format', 'x' ],
- qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
- 'pg_dumpall: unrecognized output format');
done_testing();
diff --git a/src/bin/pg_dump/t/006_pg_dumpall.pl b/src/bin/pg_dump/t/006_pg_dumpall.pl
deleted file mode 100644
index c274b777586..00000000000
--- a/src/bin/pg_dump/t/006_pg_dumpall.pl
+++ /dev/null
@@ -1,400 +0,0 @@
-# Copyright (c) 2021-2025, PostgreSQL Global Development Group
-
-use strict;
-use warnings FATAL => 'all';
-
-use PostgreSQL::Test::Cluster;
-use PostgreSQL::Test::Utils;
-use Test::More;
-
-my $tempdir = PostgreSQL::Test::Utils::tempdir;
-my $run_db = 'postgres';
-my $sep = $windows_os ? "\\" : "/";
-
-# Tablespace locations used by "restore_tablespace" test case.
-my $tablespace1 = "${tempdir}${sep}tbl1";
-my $tablespace2 = "${tempdir}${sep}tbl2";
-mkdir($tablespace1) || die "mkdir $tablespace1 $!";
-mkdir($tablespace2) || die "mkdir $tablespace2 $!";
-
-# Escape tablespace locations on Windows.
-$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
-$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
-
-# Where pg_dumpall will be executed.
-my $node = PostgreSQL::Test::Cluster->new('node');
-$node->init;
-$node->start;
-
-
-###############################################################
-# Definition of the pg_dumpall test cases to run.
-#
-# Each of these test cases is named, and the names are used for failure
-# reporting and to save the dump and restore information needed for the
-# test to assert.
-#
-# The "setup_sql" entry is a valid psql script containing SQL commands to
-# execute before the tests actually run. All setups are executed before
-# any test execution.
-#
-# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
-# "restore_cmd" must have the --file flag to save the restore output so that we
-# can assert on it.
-#
-# The "like" and "unlike" entries are regexps used to match the pg_restore
-# output. Each test case must have at least one of them filled in, but it
-# can also have both. See the "excluding_databases" test case for an example.
-my %pgdumpall_runs = (
- restore_roles => {
- setup_sql => '
- CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
- CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'directory',
- '--file' => "$tempdir/restore_roles",
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'directory',
- '--file' => "$tempdir/restore_roles.sql",
- "$tempdir/restore_roles",
- ],
- like => qr/
- ^\s*\QCREATE ROLE dumpall;\E\s*\n
- \s*\QALTER ROLE dumpall WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'SCRAM-SHA-256\E
- [^']+';\s*\n
- \s*\QCREATE ROLE dumpall2;\E
- \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
- /xm
- },
-
- restore_tablespace => {
- setup_sql => "
- CREATE ROLE tap;
- CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
- CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'directory',
- '--file' => "$tempdir/restore_tablespace",
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'directory',
- '--file' => "$tempdir/restore_tablespace.sql",
- "$tempdir/restore_tablespace",
- ],
- # Match "E" as optional since it is added on LOCATION when running on
- # Windows.
- like => qr/^
- \n\QCREATE TABLESPACE tbl1 OWNER tap LOCATION \E(?:E)?\Q'$tablespace1';\E
- \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
- \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
- /xm,
- },
-
- restore_grants => {
- setup_sql => "
- CREATE DATABASE tapgrantsdb;
- CREATE SCHEMA private;
- CREATE SEQUENCE serial START 101;
- CREATE FUNCTION fn() RETURNS void AS \$\$
- BEGIN
- END;
- \$\$ LANGUAGE plpgsql;
- CREATE ROLE super;
- CREATE ROLE grant1;
- CREATE ROLE grant2;
- CREATE ROLE grant3;
- CREATE ROLE grant4;
- CREATE ROLE grant5;
- CREATE ROLE grant6;
- CREATE ROLE grant7;
- CREATE ROLE grant8;
-
- CREATE TABLE t (id int);
- INSERT INTO t VALUES (1), (2), (3), (4);
-
- GRANT SELECT ON TABLE t TO grant1;
- GRANT INSERT ON TABLE t TO grant2;
- GRANT ALL PRIVILEGES ON TABLE t to grant3;
- GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
- GRANT USAGE, CREATE ON SCHEMA private TO grant5;
- GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
- GRANT super TO grant7;
- GRANT EXECUTE ON FUNCTION fn() TO grant8;
- ",
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'directory',
- '--file' => "$tempdir/restore_grants",
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'directory',
- '--file' => "$tempdir/restore_grants.sql",
- "$tempdir/restore_grants",
- ],
- like => qr/^
- \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
- (.*\n)*
- \n\QGRANT ALL ON SCHEMA private TO grant5;\E
- (.*\n)*
- \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
- (.*\n)*
- \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
- (.*\n)*
- \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
- \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
- \n\QGRANT ALL ON TABLE public.t TO grant3;\E
- (.*\n)*
- \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
- /xm,
- },
-
- excluding_databases => {
- setup_sql => 'CREATE DATABASE db1;
- \c db1
- CREATE TABLE t1 (id int);
- INSERT INTO t1 VALUES (1), (2), (3), (4);
- CREATE TABLE t2 (id int);
- INSERT INTO t2 VALUES (1), (2), (3), (4);
-
- CREATE DATABASE db2;
- \c db2
- CREATE TABLE t3 (id int);
- INSERT INTO t3 VALUES (1), (2), (3), (4);
- CREATE TABLE t4 (id int);
- INSERT INTO t4 VALUES (1), (2), (3), (4);
-
- CREATE DATABASE dbex3;
- \c dbex3
- CREATE TABLE t5 (id int);
- INSERT INTO t5 VALUES (1), (2), (3), (4);
- CREATE TABLE t6 (id int);
- INSERT INTO t6 VALUES (1), (2), (3), (4);
-
- CREATE DATABASE dbex4;
- \c dbex4
- CREATE TABLE t7 (id int);
- INSERT INTO t7 VALUES (1), (2), (3), (4);
- CREATE TABLE t8 (id int);
- INSERT INTO t8 VALUES (1), (2), (3), (4);
-
- CREATE DATABASE db5;
- \c db5
- CREATE TABLE t9 (id int);
- INSERT INTO t9 VALUES (1), (2), (3), (4);
- CREATE TABLE t10 (id int);
- INSERT INTO t10 VALUES (1), (2), (3), (4);
- ',
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'directory',
- '--file' => "$tempdir/excluding_databases",
- '--exclude-database' => 'dbex*',
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'directory',
- '--file' => "$tempdir/excluding_databases.sql",
- '--exclude-database' => 'db5',
- "$tempdir/excluding_databases",
- ],
- like => qr/^
- \n\QCREATE DATABASE db1\E
- (.*\n)*
- \n\QCREATE TABLE public.t1 (\E
- (.*\n)*
- \n\QCREATE TABLE public.t2 (\E
- (.*\n)*
- \n\QCREATE DATABASE db2\E
- (.*\n)*
- \n\QCREATE TABLE public.t3 (\E
- (.*\n)*
- \n\QCREATE TABLE public.t4 (/xm,
- unlike => qr/^
- \n\QCREATE DATABASE db3\E
- (.*\n)*
- \n\QCREATE TABLE public.t5 (\E
- (.*\n)*
- \n\QCREATE TABLE public.t6 (\E
- (.*\n)*
- \n\QCREATE DATABASE db4\E
- (.*\n)*
- \n\QCREATE TABLE public.t7 (\E
- (.*\n)*
- \n\QCREATE TABLE public.t8 (\E
- \n\QCREATE DATABASE db5\E
- (.*\n)*
- \n\QCREATE TABLE public.t9 (\E
- (.*\n)*
- \n\QCREATE TABLE public.t10 (\E
- /xm,
- },
-
- format_directory => {
- setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
- INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'directory',
- '--file' => "$tempdir/format_directory",
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'directory',
- '--file' => "$tempdir/format_directory.sql",
- "$tempdir/format_directory",
- ],
- like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
- },
-
- format_tar => {
- setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
- INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'tar',
- '--file' => "$tempdir/format_tar",
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'tar',
- '--file' => "$tempdir/format_tar.sql",
- "$tempdir/format_tar",
- ],
- like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
- },
-
- format_custom => {
- setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
- INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'custom',
- '--file' => "$tempdir/format_custom",
- ],
- restore_cmd => [
- 'pg_restore', '-C',
- '--format' => 'custom',
- '--file' => "$tempdir/format_custom.sql",
- "$tempdir/format_custom",
- ],
- like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
- },
-
- dump_globals_only => {
- setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
- INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
- dump_cmd => [
- 'pg_dumpall',
- '--format' => 'directory',
- '--globals-only',
- '--file' => "$tempdir/dump_globals_only",
- ],
- restore_cmd => [
- 'pg_restore', '-C', '--globals-only',
- '--format' => 'directory',
- '--file' => "$tempdir/dump_globals_only.sql",
- "$tempdir/dump_globals_only",
- ],
- like => qr/
- ^\s*\QCREATE ROLE dumpall;\E\s*\n
- /xm
- },);
-
-# First execute the setup_sql
-foreach my $run (sort keys %pgdumpall_runs)
-{
- if ($pgdumpall_runs{$run}->{setup_sql})
- {
- $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
- }
-}
-
-# Execute the tests
-foreach my $run (sort keys %pgdumpall_runs)
-{
- # Create a new target cluster to pg_restore each test case run so that we
- # don't need to take care of the cleanup from the target cluster after each
- # run.
- my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
- $target_node->init;
- $target_node->start;
-
- # Dumpall from node cluster.
- $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
- "$run: pg_dumpall runs");
-
- # Restore the dump on "target_node" cluster.
- my @restore_cmd = (
- @{ $pgdumpall_runs{$run}->{restore_cmd} },
- '--host', $target_node->host, '--port', $target_node->port);
-
- my ($stdout, $stderr) = run_command(\@restore_cmd);
-
- # pg_restore --file output file.
- my $output_file = slurp_file("$tempdir/${run}.sql");
-
- if ( !($pgdumpall_runs{$run}->{like})
- && !($pgdumpall_runs{$run}->{unlike}))
- {
- die "missing \"like\" or \"unlike\" in test \"$run\"";
- }
-
- if ($pgdumpall_runs{$run}->{like})
- {
- like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
- }
-
- if ($pgdumpall_runs{$run}->{unlike})
- {
- unlike(
- $output_file,
- $pgdumpall_runs{$run}->{unlike},
- "should not dump $run");
- }
-}
-
-# Some negative test case with dump of pg_dumpall and restore using pg_restore
-# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
-$node->command_fails_like(
- [
- 'pg_restore',
- "$tempdir/format_custom",
- '--format' => 'custom',
- '--file' => "$tempdir/error_test.sql",
- ],
- qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
- 'When -C is not used in pg_restore with dump of pg_dumpall');
-
-# test case 2: When --list option is used with dump of pg_dumpall
-$node->command_fails_like(
- [
- 'pg_restore',
- "$tempdir/format_custom", '-C',
- '--format' => 'custom',
- '--list',
- '--file' => "$tempdir/error_test.sql",
- ],
- qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
- 'When --list is used in pg_restore with dump of pg_dumpall');
-
-# test case 3: When non-exist database is given with -d option
-$node->command_fails_like(
- [
- 'pg_restore',
- "$tempdir/format_custom", '-C',
- '--format' => 'custom',
- '-d' => 'dbpq',
- ],
- qr/\Qpg_restore: error: could not connect to database "dbpq"\E/,
- 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
-);
-
-$node->stop('fast');
-
-done_testing();
On Tue, Jul 29, 2025 at 04:09:13PM -0400, Andrew Dunstan wrote:
here's a reversion patch for master.
This reverts parts or all of the following commits:
I briefly looked through this. The biggest non-reverted part is, I think,
c1da728 "Move common pg_dump code related to connections to a new file".
Refraining from a revert of that one is defensible.
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
@@ -1276,7 +1276,7 @@ main(int argc, char **argv)
 static void
 help(const char *progname)
 {
-	printf(_("%s exports a PostgreSQL database as an SQL script or to other formats.\n\n"), progname);
+	printf(_("%s dumps a database as a text file or to other formats.\n\n"), progname);
 	printf(_("Usage:\n"));
 	printf(_("  %s [OPTION]... [DBNAME]\n"), progname);
I think commit dec6643487b, which e.g. decided to standardize on the term
"export" for these programs, was independent of $SUBJECT.
On 2025-07-29 Tu 4:34 PM, Noah Misch wrote:
On Tue, Jul 29, 2025 at 04:09:13PM -0400, Andrew Dunstan wrote:
here's a reversion patch for master.
This reverts parts or all of the following commits:

I briefly looked through this. The biggest non-reverted part is, I think,
c1da728 "Move common pg_dump code related to connections to a new file".
Refraining from a revert of that one is defensible.
Yes, that was deliberate, since we intend to use it in the same way when
we redo this.
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology

@@ -1276,7 +1276,7 @@ main(int argc, char **argv)
 static void
 help(const char *progname)
 {
-	printf(_("%s exports a PostgreSQL database as an SQL script or to other formats.\n\n"), progname);
+	printf(_("%s dumps a database as a text file or to other formats.\n\n"), progname);
 	printf(_("Usage:\n"));
 	printf(_("  %s [OPTION]... [DBNAME]\n"), progname);

I think commit dec6643487b, which e.g. decided to standardize on the term
"export" for these programs, was independent of $SUBJECT.
OK, thanks for looking. Will fix.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2025-07-29 Tu 4:09 PM, Andrew Dunstan wrote:
On 2025-07-28 Mo 8:04 AM, Andrew Dunstan wrote:
On 2025-07-27 Su 7:56 PM, Noah Misch wrote:
On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Before we throw the baby out with the bathwater, how about this
suggestion? pg_dumpall would continue to produce globals.dat, but it
wouldn't be processed by pg_restore, which would only restore the
individual databases. Or else we just don't produce globals.dat at
all.
Then we could introduce a structured object that pg_restore could
safely
use for release 19, and I think we'd still have something useful for
release 18.

I dunno ... that seems like a pretty weird behavior. People would
have to do a separate text-mode "pg_dumpall -g" and remember to
restore that too. Admittedly, this could be more convenient than
"pg_dumpall -g" plus separately pg_dump'ing each database, which is
what people have to do today if they want anything smarter than a flat
text dumpfile. But it still seems like a hack --- and it would not be
compatible with v19, where presumably "pg_dumpall | pg_restore"
*would* restore globals. I think that the prospect of changing
dump/restore scripts and then having to change them again in v19
isn't too appetizing.

+1
OK, got it. Will revert.
here's a reversion patch for master. It applies cleanly to release 18
as well. Thanks to Mahendra Singh Thalor for helping me sanity check
it (Any issues are of course my responsibility)

I'll work on pulling the entry out of the release notes.
OK, now that's reverted we should discuss how to proceed. I had two
thoughts - we could invent a JSON format for the globals, or we
could just use the existing archive format. I think the archive format
is pretty flexible, and should be able to accommodate this. The downside
is it's not humanly readable. The upside is that we don't need to do
anything special either to write it or parse it.
There might also be other reasonable options. But I think we should stay
out of the business of using custom code to parse text.
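One way to stay out of that business is to dispatch on the archive's leading
magic block, as pg_restore already does for its formats. A minimal standalone
sketch of that kind of dispatch ("PGDMP" is pg_dump's real archive magic;
"PGGLO" is only the hypothetical globals-TOC magic floated earlier in this
thread, not anything committed):

```c
#include <assert.h>
#include <string.h>

/*
 * Classify a dump by its leading magic bytes.  "PGDMP" marks a regular
 * pg_dump archive; "PGGLO" stands in for the proposed global-objects TOC.
 * Only the first five bytes are examined, as with real archive magic.
 */
static const char *
classify_magic(const char *buf, size_t len)
{
	if (len >= 5 && memcmp(buf, "PGDMP", 5) == 0)
		return "database";
	if (len >= 5 && memcmp(buf, "PGGLO", 5) == 0)
		return "globals";
	return "unknown";
}
```

A reader like this never has to tokenize SQL text; it only branches on a
fixed-width header, which is the property the archiver format already gives us.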
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
The 18 branch is broken for apt.pg.o:
00:54:18 # Failed test 'dump_globals_only: pg_dumpall runs'
00:54:18 # at t/006_pg_dumpall.pl line 329.
00:54:18 # Tests were run but no plan was declared and done_testing() was not seen.
00:54:18 # Looks like your test exited with 2 just after 1.
00:54:18 t/006_pg_dumpall.pl ...........
00:54:18 # initializing database system by copying initdb template
00:54:18 # initializing database system by copying initdb template
00:54:18 not ok 1 - dump_globals_only: pg_dumpall runs
00:54:18 Dubious, test returned 2 (wstat 512, 0x200)
00:54:18 Failed 1/1 subtests
Devel is ok.
Christoph
On 2025-07-31 Th 5:44 AM, Christoph Berg wrote:
The 18 branch is broken for apt.pg.o:
00:54:18 # Failed test 'dump_globals_only: pg_dumpall runs'
00:54:18 # at t/006_pg_dumpall.pl line 329.
00:54:18 # Tests were run but no plan was declared and done_testing() was not seen.
00:54:18 # Looks like your test exited with 2 just after 1.
00:54:18 t/006_pg_dumpall.pl ...........
00:54:18 # initializing database system by copying initdb template
00:54:18 # initializing database system by copying initdb template
00:54:18 not ok 1 - dump_globals_only: pg_dumpall runs
00:54:18 Dubious, test returned 2 (wstat 512, 0x200)
00:54:18 Failed 1/1 subtests

Devel is ok.
That file was deleted by the revert. Maybe you have a cache problem?
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Re: Andrew Dunstan
That file was deleted by the revert. Maybe you have a cache problem?
Oh right. This was caused by our snapshot builds using the latest
tarball (if available) and putting a patch on top of that. I've now
bumped the upstream version to 18~beta3, this should avoid the
problem.
Sorry for the noise, and thanks for the pointer!
Christoph
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted...
Can we close the open item for this one now? Or is there something else
remaining?
--
nathan
On 2025-07-31 Th 2:22 PM, Nathan Bossart wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted...
Can we close the open item for this one now? Or is there something else
remaining?
Thanks for the reminder. I have marked the item as fixed.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.
I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.
If pg_backup_json.c emerged as a new backend to the archiver API, I'd not have
concerns about that. But a JSON format specific to $SUBJECT sounds like a
recipe for bugs.
There might also be other reasonable options. But I think we should stay out
of the business of using custom code to parse text.
Agreed.
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.
Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Thanks Andrew, Noah and all others for feedback.
Based on the above suggestions and discussions, I removed sql commands
from the global.dat file. For global commands, now we are making
toc.dat/toc.dmp/toc.tar file based on format specified and based on
format specified, we are making archive entries for these global
commands. By this approach, we removed the hard-coded parsing part of
the global.dat file and we are able to skip DROP DATABASE with the
globals-only option.
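The DROP DATABASE skip described above amounts to a filter applied while
walking TOC entries during restore. A standalone illustration of that rule
(a toy TocEntry, not pg_dump's real struct; the tag string matches the one
used in the WIP patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for pg_dump's TocEntry; the real struct carries much more. */
typedef struct
{
	const char *tag;
} TocEntry;

/*
 * With --globals-only, DROP DATABASE entries are skipped during restore
 * so that existing databases on the target cluster are left untouched.
 */
static int
skip_for_globals_only(const TocEntry *te, int globals_only)
{
	return globals_only && te != NULL && te->tag != NULL &&
		strcmp(te->tag, "DROP_DATABASE") == 0;
}
```

Every other entry (roles, tablespaces, and so on) still flows through the
normal restore path; only the destructive database entries are filtered.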
Here, I am attaching a patch for review, testing and feedback. This is
a WIP patch. I will do some more code cleanup and will add some more
comments also. Please review this and let me know design level
feedback. Thanks Tushar Ahuja for some internal testing and feedback.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v01-15102025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From 09cc2e11a4b8639ee81528363a9f134cb7fc78c5 Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Wed, 15 Oct 2025 22:38:14 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if they exist, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 23 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 599 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 599 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 22 +
src/bin/pg_dump/t/006_pg_dumpall.pl | 400 ++++++++++++++++++
14 files changed, 1671 insertions(+), 146 deletions(-)
create mode 100644 src/bin/pg_dump/t/006_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..73e166062b9 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>global.dat</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non plain text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index a2233b0a1b4..4a4ebbd8ec9 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -102,6 +102,7 @@ tests += {
't/003_pg_dump_with_server.pl',
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
+ 't/006_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, then update already added entry
+ * into array for cleanup.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..d378c7b601e 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data into file as we are restoring dump
+ * of multiple databases which was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,9 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* In globals-only mode, skip DROP DATABASE entries */
+ if (globals_only && te->tag && strcmp(te->tag, "DROP_DATABASE") == 0)
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1324,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1703,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1724,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 641bece12c7..857f1cc7948 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..c9e8c15b721 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpIdLocal(void);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +127,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +159,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +209,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +221,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +261,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +274,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +292,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +334,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +450,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +525,33 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory first.
+ */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* Set the file path for the global SQL commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +601,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +635,123 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ ((ArchiveHandle *) fout)->connection = conn;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ pg_log_info("saving encoding = %s", encname);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "ENCODING",
+ .description = "ENCODING",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings; /* already "on"/"off" */
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "STDSTRINGS",
+ .description = "STDSTRINGS",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "DEFAULT_TRANSACTION_READ_ONLY",
+ .description = "DEFAULT_TRANSACTION_READ_ONLY",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +795,41 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
-
- PQfinish(conn);
+ dumpDatabases(conn, archDumpFormat);
if (verbose)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (filename && archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +840,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or in other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +922,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -790,7 +943,7 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Drop roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -799,15 +952,31 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ printfPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dropRoles",
+ .description = "dropRoles_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -888,7 +1057,7 @@ dumpRoles(PGconn *conn)
i_rolcomment = PQfnumber(res, "rolcomment");
i_is_current_user = PQfnumber(res, "is_current_user");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -993,7 +1162,25 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoles",
+ .description = "dumpRoles_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
}
/*
@@ -1001,15 +1188,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1087,7 +1272,7 @@ dumpRoleMembership(PGconn *conn)
i_inherit_option = PQfnumber(res, "inherit_option");
i_set_option = PQfnumber(res, "set_option");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Role memberships\n--\n\n");
/*
@@ -1167,6 +1352,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1409,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1431,24 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoleMembership",
+ .description = "dumpRoleMembership_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = creaQry->data));
+ }
}
}
@@ -1260,7 +1460,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1287,7 +1488,7 @@ dumpRoleGUCPrivs(PGconn *conn)
"FROM pg_catalog.pg_parameter_acl "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1312,14 +1513,28 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoleGUCPrivs",
+ .description = "dumpRoleGUCPrivs_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1546,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1341,21 +1557,37 @@ dropTablespaces(PGconn *conn)
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ printfPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dropTablespaces",
+ .description = "dropTablespaces_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1381,7 +1613,7 @@ dumpTablespaces(PGconn *conn)
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1451,7 +1683,25 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpTablespaces",
+ .description = "dumpTablespaces_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
free(fspcname);
destroyPQExpBuffer(buf);
@@ -1481,7 +1731,7 @@ dropDBs(PGconn *conn)
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY datname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1497,9 +1747,26 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "DROP_DATABASE",
+ .description = "DROP_DATABASE_COMMANDS",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
}
}
@@ -1517,6 +1784,7 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1532,7 +1800,9 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
free(sanitized);
}
@@ -1542,7 +1812,19 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpUserConfig",
+ .description = "dumpUserConfig_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
}
PQclear(res);
@@ -1608,10 +1890,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1910,43 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (archDumpFormat == archNull && PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; pg_dump will create each database's dump
+ * file (or subdirectory) inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a 'databases' subdirectory under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1651,12 +1960,15 @@ dumpDatabases(PGconn *conn)
continue;
}
- pg_log_info("dumping database \"%s\"", dbname);
+ if (archDumpFormat == archNull)
+ pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
free(sanitized);
/*
* We assume that "template1" and "postgres" already exist in the
* target installation. dropDBs() won't have removed them, for fear
@@ -1669,24 +1981,38 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * For non-plain formats, compute the per-database dump path and record
+ * the database's OID and name in the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +2021,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2034,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2044,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, pass the output path and dump
+ * format to pg_dump so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1868,3 +2218,42 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * This will validate dump formats.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * Assign the next dump ID for a global-objects archive entry.
+ */
+static int
+createDumpIdLocal(void)
+{
+ return ++dumpIdVal;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..1334d4fdc1e 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,60 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -316,6 +350,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
@@ -347,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +516,111 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If this is an archive created by pg_dumpall (map.dat, toc.tar, or
+ * toc.dmp is present), restore all the databases it contains, but
+ * skip those matching --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If --globals-only was given, restore just the globals and stop. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* Not a pg_dumpall archive: restore a single database. */
+ {
+ if (db_exclude_patterns.head != NULL)
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during the restore.
+ */
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers, append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * This restores one database from its toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +628,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only update the AH handle for cleanup:
+ * the previous entry is already in the array and its connection has been
+ * closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +656,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +688,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +705,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +741,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +847,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists as a regular file in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
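The same regular-file check can be sketched standalone (assuming POSIX `stat`; `MAX_PATH_LEN` is a stand-in for `MAXPGPATH`, and the too-long-path case just returns false where the real code calls `pg_fatal`):

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

#define MAX_PATH_LEN 1024       /* stand-in for MAXPGPATH */

/* Return true only if dir/filename exists and is a regular file. */
static bool
file_exists_in_dir(const char *dir, const char *filename)
{
    struct stat st;
    char buf[MAX_PATH_LEN];

    if (snprintf(buf, sizeof(buf), "%s/%s", dir, filename) >= (int) sizeof(buf))
        return false;           /* path too long; real code errors out */

    /* S_ISREG filters out directories, devices, sockets, etc. */
    return stat(buf, &st) == 0 && S_ISREG(st.st_mode);
}
```

Note the `S_ISREG` test matters here: a directory named `map.dat` must not be mistaken for a pg_dumpall archive marker.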
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that match
+ * a pattern in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns are matched as literal names");
+
+ /*
+ * Process all database names one by one; any name that should be skipped
+ * during restore is marked in the list.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding OIDs.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains only the global.dat file, return immediately as
+ * there are no databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line (still ending with a newline) */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen); /* drops the trailing newline */
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
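The per-line parsing above can be sketched in isolation. Each map.dat line has the form `<oid> <dbname>\n`; the sketch below uses a hypothetical `parse_map_line` helper with `memcpy` in place of `strlcpy` (the real code stores the result in a flexible-array struct), and shows explicitly why the copy drops the trailing newline:

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse one map.dat line of the form "<oid> <dbname>\n" into its parts.
 * Returns false for malformed lines.
 */
static bool
parse_map_line(const char *line, unsigned int *oid, char *name, size_t namesz)
{
    const char *p = line;
    size_t len;

    /* The OID is a leading run of digits followed by a single space. */
    while (isdigit((unsigned char) *p))
        p++;
    if (p == line || *p != ' ')
        return false;
    sscanf(line, "%u", oid);
    p++;                        /* skip the separating space */

    /* The database name is the rest of the line, minus the newline. */
    len = strlen(p);
    if (len > 0 && p[len - 1] == '\n')
        len--;                  /* same effect as strlcpy with size namelen */
    if (len == 0 || len >= namesz)
        return false;
    memcpy(name, p, len);
    name[len] = '\0';
    return true;
}
```

One consequence worth noting: if the last line of map.dat lacks a terminating newline, a size-`namelen` `strlcpy` would silently drop the final character of the database name, so the explicit newline check above is the safer formulation.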
+
+/*
+ * restore_all_databases
+ *
+ * This restores all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Based on file, set path. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the connection db name so it can be reused for all databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If exclude-patterns is given, then connect to the database to process
+ * it.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * Filter the database list according to the exclude patterns.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the global archive and execute (or append) all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If the database already exists, don't set the createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset the flags, which might have been changed by the previous restore
+ * in pg_backup_archiver.c.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 37d893d5e6a..c3c5fae11ea 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,24 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +262,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
done_testing();
diff --git a/src/bin/pg_dump/t/006_pg_dumpall.pl b/src/bin/pg_dump/t/006_pg_dumpall.pl
new file mode 100644
index 00000000000..c274b777586
--- /dev/null
+++ b/src/bin/pg_dump/t/006_pg_dumpall.pl
@@ -0,0 +1,400 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named; the name is used for failure reporting and to
+# locate the dump and restore artifacts the test asserts on.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute
+# before the tests run. All setups are executed before any test executes.
+#
+# "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must include the --file flag to save the restore output so
+# that we can assert on it.
+#
+# "like" and "unlike" are regexps matched against the pg_restore output.
+# Each test case must fill in at least one of them, and may have both; see
+# the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ \s*\QALTER ROLE dumpall WITH SUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'SCRAM-SHA-256\E
+ [^']+';\s*\n
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl1 OWNER tap LOCATION \E(?:E)?\Q'$tablespace1';\E
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case into, so that
+ # we don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with a pg_dumpall dump restored using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\Qpg_restore: error: could not connect to database "dbpq"\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could use invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com

Thanks Andrew, Noah and all others for the feedback.
Based on the above suggestions and discussion, I removed the SQL commands
from the global.dat file. For the global commands, we now create a
toc.dat/toc.dmp/toc.tar file, depending on the format specified, and write
archive entries for those commands into it. With this approach we no longer
need the hard-coded parsing of global.dat, and we can skip DROP DATABASE
when the --globals-only option is used.

Here I am attaching a patch for review, testing, and feedback. This is a
WIP patch; I will do some more code cleanup and add some more comments.
Please review it and let me know your design-level feedback. Thanks to
Tushar Ahuja for some internal testing and feedback.
Hi,
Here I am attaching an updated patch. In an offline discussion, Andrew
reported some test-case failures (thanks, Andrew); I have fixed those.
Please let me know your feedback on the patch.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v02-16102025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From feeeb56d7c3e943cda2e608a5cb85cca8dc32edb Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Thu, 16 Oct 2025 16:13:50 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if present, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s), in the same way the option
works in pg_dumpall.
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 23 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 600 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 593 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 10 +
src/bin/pg_dump/t/006_pg_dumpall.pl | 396 ++++++++++++++++++
14 files changed, 1650 insertions(+), 146 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/006_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: this option can be omitted only when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in the
+ <filename>databases</filename> subdirectory, with each archive named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index a2233b0a1b4..4a4ebbd8ec9 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -102,6 +102,7 @@ tests += {
't/003_pg_dump_with_server.pl',
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
+ 't/006_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the entry already
+ * registered for cleanup so that it points to the current archive.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..d378c7b601e 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file, since we are restoring
+ * a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,9 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1324,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1703,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1724,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 890db7b08c2..4501802d805 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..668e55e415c 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpIdLocal(void);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +127,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +159,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +209,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +221,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +261,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +274,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +292,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +334,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +450,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +525,33 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory first.
+ */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* Set the file path for the global SQL commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +601,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +635,123 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ pg_log_info("saving encoding = %s", encname);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "ENCODING",
+ .description = "ENCODING",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings; /* already "on" or "off" */
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "STDSTRINGS",
+ .description = "STDSTRINGS",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+
+ appendPQExpBufferStr(qry, "SET default_transaction_read_only = off;\n");
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "DEFAULT_TRANSACTION_READ_ONLY",
+ .description = "DEFAULT_TRANSACTION_READ_ONLY",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +795,41 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
-
- PQfinish(conn);
+ dumpDatabases(conn, archDumpFormat);
if (verbose)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (filename && archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +840,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +922,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -790,7 +943,7 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Drop roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -799,15 +952,31 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dropRoles",
+ //.owner = dba,
+ .description = "dropRoles_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ //.dropStmt = delQry->data));
+ }
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -888,7 +1057,7 @@ dumpRoles(PGconn *conn)
i_rolcomment = PQfnumber(res, "rolcomment");
i_is_current_user = PQfnumber(res, "is_current_user");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -993,7 +1162,25 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
+ if_exists ? "IF EXISTS " : "", fmtId(rolename));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoles",
+ //.owner = dba,
+ .description = "dumpRoles_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ //.dropStmt = delQry->data));
+ }
}
/*
@@ -1001,15 +1188,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1087,7 +1272,7 @@ dumpRoleMembership(PGconn *conn)
i_inherit_option = PQfnumber(res, "inherit_option");
i_set_option = PQfnumber(res, "set_option");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Role memberships\n--\n\n");
/*
@@ -1167,6 +1352,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1409,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1431,24 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoleMembership",
+ //.owner = dba,
+ .description = "dumpRoleMembership_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = creaQry->data));
+ }
}
}
@@ -1260,7 +1460,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1287,7 +1488,7 @@ dumpRoleGUCPrivs(PGconn *conn)
"FROM pg_catalog.pg_parameter_acl "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1312,14 +1513,28 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoleGUCPrivs",
+ //.owner = dba,
+ .description = "dumpRoleGUCPrivs_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1546,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1341,21 +1557,37 @@ dropTablespaces(PGconn *conn)
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dropTablespaces",
+ //.owner = dba,
+ .description = "dropTablespaces_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ // .dropStmt = delQry->data));
+ }
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1381,7 +1613,7 @@ dumpTablespaces(PGconn *conn)
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1451,7 +1683,25 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
+ if_exists ? "IF EXISTS " : "", fmtId(fspcname));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpTablespaces",
+ //.owner = dba,
+ .description = "dumpTablespaces_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ //.dropStmt = delQry->data));
+ }
free(fspcname);
destroyPQExpBuffer(buf);
@@ -1481,7 +1731,7 @@ dropDBs(PGconn *conn)
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY datname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1497,9 +1747,26 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "DROP_DATABASE",
+ //.owner = dba,
+ .description = "DROP_DATABASE_COMMANDS",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ //.dropStmt = delQry->data));
+ }
}
}
@@ -1517,6 +1784,7 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
+ static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1532,7 +1800,9 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ if (!header_done && (archDumpFormat == archNull))
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ header_done = true;
free(sanitized);
}
@@ -1542,7 +1812,19 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpUserConfig",
+ //.owner = dba,
+ .description = "dumpUserConfig_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
}
PQclear(res);
@@ -1608,10 +1890,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1910,43 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (archDumpFormat == archNull && PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main output directory; pg_dump will then create
+ * each database's dump file (or subdirectory) inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1651,10 +1960,14 @@ dumpDatabases(PGconn *conn)
continue;
}
- pg_log_info("dumping database \"%s\"", dbname);
+ if (archDumpFormat == archNull)
+ pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
free(sanitized);
/*
@@ -1669,24 +1982,38 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain-format dump, append the database OID and name
+ * to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +2022,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2035,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2045,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain-format dump, pass the output file name and the
+ * dump format to pg_dump so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1868,3 +2219,42 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Validate the requested dump format and return the corresponding ArchiveFormat.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpIdLocal
+ *
+ * Return the next dump ID for archive entries created locally by pg_dumpall.
+ */
+static int
+createDumpIdLocal(void)
+{
+ return ++dumpIdVal;
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..02176a77bd7 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is a utility for extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,60 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global objects, not databases */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -316,6 +350,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
@@ -347,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +516,105 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a map.dat file is present, restore all the databases listed in it,
+ * skipping those that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, the -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If --globals-only was specified, restore only the global objects. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else	/* no map.dat: restore a single-database archive */
+ {
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restore all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+					   int numWorkers, bool append_data, int num,
+					   bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers, append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database from its toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +622,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op.  If we are
+ * restoring multiple databases, only replace the AH handle registered for
+ * cleanup: the previous entry is already in the array and its connection
+ * has been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +650,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +682,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +699,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +735,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +841,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Walk the database names one by one; entries to be skipped are marked
+ * with InvalidOid rather than removed from the list.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names and their
+ * corresponding OIDs.
+ *
+ * Returns the number of database entries found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+
+ /*
+ * If the dump contains no map.dat file, there are no databases to
+ * restore, so return early.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen + 1);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, using
+ * the mapping in map.dat.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered while restoring.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Based on file, set path. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the connection database name so it can be reused for each database. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If any exclude patterns were given, connect to a database so the
+ * patterns can be evaluated.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * filter the db list according to the exclude patterns
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the global archive (toc.dat/.dmp/.tar) and restore the global commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If the database already exists, don't set the createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags; they might have been changed in pg_backup_archiver.c by
+ * the previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..56e89da1e5e
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,12 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +250,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
done_testing();
diff --git a/src/bin/pg_dump/t/006_pg_dumpall.pl b/src/bin/pg_dump/t/006_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/006_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and to save the dump and restore output needed for the test's
+# assertions.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute
+# before the tests run. All setup scripts are executed before any test.
+#
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that we
+# can assert on it.
+#
+# "like" and "unlike" are regexps used to match the pg_restore output.
+# Each test case must supply at least one of them and may supply both;
+# see the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case run so that we
+ # don't need to take care of the cleanup from the target cluster after each
+ # run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with a pg_dumpall dump restored using pg_restore.
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
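For readers following the map.dat handling in the patch above: get_dbname_oid_list_from_mfile() expects each line to carry an OID, a single space, and the database name. A minimal standalone sketch of parsing that line format, with a hypothetical function name and simplified error handling (not the patch's actual code):

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse one map.dat-style line of the form "<oid> <dbname>".
 * Returns true on success and fills *oid and name; returns false for a
 * malformed line (no leading digits, missing separator, or empty name).
 */
bool
parse_map_line(const char *line, unsigned int *oid, char *name, size_t namesz)
{
	const char *p = line;

	while (isdigit((unsigned char) *p))
		p++;
	if (p == line || *p != ' ')
		return false;			/* no OID digits or no space separator */
	if (sscanf(line, "%u", oid) != 1 || *oid == 0)
		return false;			/* OID 0 is InvalidOid in PostgreSQL */
	p++;						/* skip the separator */
	if (*p == '\0')
		return false;			/* empty database name */
	snprintf(name, namesz, "%s", p);
	return true;
}
```

One consequence of this one-line-per-entry layout, worth keeping in mind when reviewing the reader, is that the name buffer must be sized for the full name plus the terminating NUL; an off-by-one in the copy silently truncates the last character of the database name.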
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could use invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com

Thanks Andrew, Noah and all others for the feedback.
Based on the above suggestions and discussions, I removed the SQL commands
from the global.dat file. Global commands now go into a
toc.dat/toc.dmp/toc.tar file, depending on the format specified, and we
create archive entries for each of these global commands. With this
approach, we removed the hard-coded parsing of the global.dat file, and we
are able to skip DROP DATABASE with the globals-only option.

Here, I am attaching a patch for review, testing and feedback. This is
a WIP patch. I will do some more code cleanup and will add some more
comments also. Please review this and let me know design level
feedback. Thanks to Tushar Ahuja for some internal testing and feedback.

Hi,
Here, I am attaching an updated patch. In an offline discussion, Andrew
reported some test-case failures (thanks, Andrew). I fixed those.
Please let me know feedback for the patch.
Hi,
Here I am attaching a rebased patch, as v02 was failing on HEAD.
Thanks Tushar for the testing.
Please review this and let me know feedback.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
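As background for the restore-side lookup discussed in this thread: the patch resolves each database's archive under <dumpdir>/databases by preferring {oid}.tar, then {oid}.dmp, and finally a directory named {oid}. A standalone sketch of that resolution order (illustrative only; the function names here are hypothetical, not the patch's):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Return true if path exists and is a regular file. */
bool
is_regular_file(const char *path)
{
	struct stat st;

	return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}

/*
 * Build the archive path for a database OID under dir/databases:
 * prefer <oid>.tar, then <oid>.dmp, else fall back to the directory <oid>.
 */
void
db_archive_path(const char *dir, unsigned int oid, char *out, size_t outsz)
{
	snprintf(out, outsz, "%s/databases/%u.tar", dir, oid);
	if (is_regular_file(out))
		return;
	snprintf(out, outsz, "%s/databases/%u.dmp", dir, oid);
	if (is_regular_file(out))
		return;
	snprintf(out, outsz, "%s/databases/%u", dir, oid);
}
```

Checking for regular files first means a stray directory named "{oid}.tar" cannot shadow a valid directory-format dump, which matches the stat()/S_ISREG test the patch uses in file_exists_in_directory().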
Attachments:
v03-28102025-Non-text-modes-for-pg_dumpall-correspondingly-change.patchapplication/octet-stream; name=v03-28102025-Non-text-modes-for-pg_dumpall-correspondingly-change.patchDownload
From 474880773c7fc4679ea9054bde58c912ce279266 Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Tue, 28 Oct 2025 11:27:43 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if present, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 23 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 600 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 593 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 10 +
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 ++++++++++++++++++
14 files changed, 1650 insertions(+), 146 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then its contents restored.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the already-registered
+ * cleanup entry to point at the current archive instead of adding a new one.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..d378c7b601e 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, open the output file in append mode; this is used
+ * when restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,9 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ if (globals_only && te->tag && strcmp(te->tag, "DROP_DATABASE") == 0)
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1324,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1703,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1724,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 47913178a93..00ce946aab1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..668e55e415c 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,8 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpIdLocal(void);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +127,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +159,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +209,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +221,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +261,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +274,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +292,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +334,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +450,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +525,33 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory instead.
+ */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* Set the file path for the global SQL commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +601,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +635,123 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be as far back as 9.2, and up to any minor release
+ * of our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ pg_log_info("saving encoding = %s", encname);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "ENCODING",
+ .description = "ENCODING",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings ? "on" : "off";
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "STDSTRINGS",
+ .description = "STDSTRINGS",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+
+ ArchiveEntry(fout, nilCatalogId, createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "DEFAULT_TRANSACTION_READ_ONLY",
+ .description = "DEFAULT_TRANSACTION_READ_ONLY",
+ .section = SECTION_PRE_DATA,
+ .createStmt = qry->data));
+
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +795,41 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
-
- PQfinish(conn);
+ dumpDatabases(conn, archDumpFormat);
if (verbose)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (filename && archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +840,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or in other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +922,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -790,7 +943,7 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Drop roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -799,15 +952,31 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dropRoles",
+ .description = "dropRoles_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -888,7 +1057,7 @@ dumpRoles(PGconn *conn)
i_rolcomment = PQfnumber(res, "rolcomment");
i_is_current_user = PQfnumber(res, "is_current_user");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Roles\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -993,7 +1162,25 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
+ if_exists ? "IF EXISTS " : "", fmtId(rolename));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoles",
+ .description = "dumpRoles_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
}
/*
@@ -1001,15 +1188,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1087,7 +1272,7 @@ dumpRoleMembership(PGconn *conn)
i_inherit_option = PQfnumber(res, "inherit_option");
i_set_option = PQfnumber(res, "set_option");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Role memberships\n--\n\n");
/*
@@ -1167,6 +1352,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1409,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1431,24 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoleMembership",
+ .description = "dumpRoleMembership_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = creaQry->data));
+ }
}
}
@@ -1260,7 +1460,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1287,7 +1488,7 @@ dumpRoleGUCPrivs(PGconn *conn)
"FROM pg_catalog.pg_parameter_acl "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1312,14 +1513,28 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpRoleGUCPrivs",
+ .description = "dumpRoleGUCPrivs_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1546,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1341,21 +1557,37 @@ dropTablespaces(PGconn *conn)
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dropTablespaces",
+ .description = "dropTablespaces_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1381,7 +1613,7 @@ dumpTablespaces(PGconn *conn)
"WHERE spcname !~ '^pg_' "
"ORDER BY 1");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && (archDumpFormat == archNull))
fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1451,7 +1683,25 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
+ if_exists ? "IF EXISTS " : "", fmtId(fspcname));
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpTablespaces",
+ .description = "dumpTablespaces_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
free(fspcname);
destroyPQExpBuffer(buf);
@@ -1481,7 +1731,7 @@ dropDBs(PGconn *conn)
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY datname");
- if (PQntuples(res) > 0)
+ if (PQntuples(res) > 0 && archDumpFormat == archNull)
fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
for (i = 0; i < PQntuples(res); i++)
@@ -1497,9 +1747,26 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(),
+ ARCHIVE_OPTS(.tag = "DROP_DATABASE",
+ .description = "DROP_DATABASE_COMMANDS",
+ .section = SECTION_PRE_DATA,
+ .createStmt = delQry->data));
+ }
}
}
@@ -1517,6 +1784,7 @@ dumpUserConfig(PGconn *conn, const char *username)
{
PQExpBuffer buf = createPQExpBuffer();
PGresult *res;
+ static bool header_done = false;
printfPQExpBuffer(buf, "SELECT unnest(setconfig) FROM pg_db_role_setting "
"WHERE setdatabase = 0 AND setrole = "
@@ -1532,7 +1800,9 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ if (!header_done && (archDumpFormat == archNull))
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ header_done = true;
free(sanitized);
}
@@ -1542,7 +1812,19 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ {
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpIdLocal(), /* dump ID */
+ ARCHIVE_OPTS(.tag = "dumpUserConfig",
+ //.owner = dba,
+ .description = "dumpUserConfig_des",
+ .section = SECTION_PRE_DATA,
+ .createStmt = buf->data));
+ }
}
PQclear(res);
@@ -1608,10 +1890,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1910,43 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
- if (PQntuples(res) > 0)
+ if (archDumpFormat == archNull && PQntuples(res) > 0)
fprintf(OPF, "--\n-- Databases\n--\n\n");
+ /*
+ * For non-plain formats, create a "databases" subdirectory under the
+ * main output directory; pg_dump will then write each database's dump
+ * file (or subdirectory) into it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
+
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1651,10 +1960,14 @@ dumpDatabases(PGconn *conn)
continue;
}
- pg_log_info("dumping database \"%s\"", dbname);
+ if (archDumpFormat == archNull)
+ pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
free(sanitized);
/*
@@ -1669,24 +1982,38 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
- else
- {
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
+ /* Since pg_dump won't emit a \connect command, we must do so here. */
+ else if (archDumpFormat == archNull)
fprintf(OPF, "\\connect %s\n\n", dbname);
- }
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * For non-plain formats, compute the per-database output path and append
+ * the database OID and name to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +2022,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2035,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2045,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For non-plain formats, pass the output file name and the requested
+ * archive format to pg_dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1868,3 +2219,42 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+static int
+createDumpIdLocal(void)
+{
+ return ++dumpIdVal;
+}
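As a cross-check of the two halves of the patch: the map.dat file written by dumpDatabases() and read back by get_dbname_oid_list_from_mfile() holds one "<oid> <dbname>" line per database. Below is a stand-alone sketch of that line-splitting logic, outside the patch; the function name parse_map_line is illustrative, not part of the patch, and it assumes the trailing newline has already been stripped.

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Split a map.dat line of the form "<oid> <dbname>": leading digits form
 * the OID, a single space separates it from the database name, and the
 * name runs to the end of the line.  Mirrors the parsing loop in
 * get_dbname_oid_list_from_mfile, but as a self-contained helper.
 */
static bool
parse_map_line(const char *line, unsigned int *oid,
			   char *name, size_t namesize)
{
	const char *p = line;

	/* Scan the leading digit run that should be the OID. */
	while (isdigit((unsigned char) *p))
		p++;
	if (p == line || *p != ' ')
		return false;			/* no OID, or missing separator */
	if (sscanf(line, "%u", oid) != 1)
		return false;
	p++;						/* skip the single separating space */
	if (*p == '\0' || strlen(p) >= namesize)
		return false;			/* empty or over-long database name */
	strcpy(name, p);
	return true;
}
```

The real code additionally rejects OID 0 (InvalidOid) and allocates the DbOidName struct with a flexible array member sized to the name.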
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..02176a77bd7 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,60 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -316,6 +350,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
@@ -347,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +516,105 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a map.dat file is present, restore all the databases listed in it,
+ * but skip those matching --exclude-database patterns.
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* process if map.dat file does not exist. */
+ {
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restore all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during the
+ * restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers, append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore a single database from its archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +622,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. When
+ * restoring multiple databases, just replace the AH handle registered for
+ * cleanup: the previous entry is already in the array and its connection
+ * has been closed, so the same array slot can be reused.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +650,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +682,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +699,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +735,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +841,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection is available, so --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Walk the database name list and mark any entry that should be skipped
+ * during restore.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ pat_cell->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read the map.dat file line by line and build a list of database names
+ * and their corresponding OIDs.
+ *
+ * Returns the total number of database entries found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is only global.dat file in dump, then return from here as
+ * there is no database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ /* strlcpy() with namelen drops the line's trailing newline here. */
+ strlcpy(dbidname->str, dbname, namelen);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, based
+ * on the map.dat mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Based on file, set path. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the connection database name so it can be reused for each restore. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If --exclude-database patterns were given, connect to a database so
+ * that the patterns can be matched server-side.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * Filter the database list according to the exclude patterns.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Restore all global objects from the global archive. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
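One detail worth noting from get_dbnames_list_to_restore() above: when pg_restore has no server connection, each --exclude-database pattern is treated as a literal name and compared with pg_strcasecmp(). A minimal stand-alone sketch of that fallback follows; name_matches_literal is an illustrative name, not part of the patch, and tolower() is only an approximation of pg_strcasecmp()'s ASCII-oriented case folding.

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>

/*
 * Compare a database name against an exclude "pattern" treated as a
 * literal name, case-insensitively.  This is the no-connection fallback
 * behavior; with a connection, the patch instead matches the pattern
 * server-side via processSQLNamePattern().
 */
static bool
name_matches_literal(const char *dbname, const char *pattern)
{
	while (*dbname && *pattern)
	{
		if (tolower((unsigned char) *dbname) !=
			tolower((unsigned char) *pattern))
			return false;
		dbname++;
		pattern++;
	}
	/* Match only if both strings ended together. */
	return *dbname == '\0' && *pattern == '\0';
}
```

In particular, a wildcard pattern such as "post%" excludes nothing in this fallback mode, which is why the patch logs that patterns are being considered as names.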
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..56e89da1e5e
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,12 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +250,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
done_testing();
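Putting the pieces of the patch together, the on-disk layout it produces and consumes looks roughly like this (directory and file names follow the logic in dumpDatabases() and restore_all_databases(); the OIDs are illustrative, and a single archive contains only one of the three per-database forms, matching the format given to pg_dumpall):

```text
dump_dir/
    toc.dat          global objects (roles, tablespaces, databases);
                     toc.dmp / toc.tar for custom / tar outer formats
    map.dat          one "<oid> <dbname>" line per dumped database
    databases/
        16384/       per-database dump: a directory for --format=directory,
        16385.dmp    a .dmp file for --format=custom,
        16386.tar    or a .tar file for --format=tar
```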
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named; the names are used for failure reporting and to
+# save the dump and restore information needed for the test's assertions.
+#
+# The "setup_sql" is a psql valid script that contains SQL commands to execute
+# before of actually execute the tests. The setups are all executed before of
+# any test execution.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that we
+# can assert on it.
+#
+# The "like" and "unlike" is a regexp that is used to match the pg_restore
+# output. It must have at least one of then filled per test cases but it also
+# can have both. See "excluding_databases" test case for example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster for each test case's pg_restore run so that
+ # we don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases restoring a pg_dumpall dump using pg_restore.
+# test case 1: when -C is not used in pg_restore with a dump from pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When the --list option is used with a dump from pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
On Tue, 28 Oct 2025 at 11:32, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com

Thanks Andrew, Noah and all others for feedback.
Based on the above suggestions and discussions, I removed the SQL commands
from the global.dat file. For the global commands, we now create a
toc.dat/toc.dmp/toc.tar file according to the specified format, and we make
archive entries for these global commands. With this approach, we removed the
hard-coded parsing of the global.dat file, and we are able to skip DROP
DATABASE with the globals-only option.

Here, I am attaching a patch for review, testing and feedback. This is
a WIP patch. I will do some more code cleanup and will add some more
comments also. Please review this and let me know design level
feedback. Thanks Tushar Ahuja for some internal testing and feedback.

Hi,
Here, I am attaching an updated patch. In offline discussion, Andrew
reported some test-case failures(Thanks Andrew). I fixed those.
Please let me know feedback for the patch.

Hi,
Here I am attaching a re-based patch as v02 was failing on head.
Thanks Tushar for the testing.
Please review this and let me know feedback.
Hi all,
Here I am attaching an updated patch for review and testing. Based on
some offline comments by Andrew, I did some code cleanup.
Please consider this patch for feedback.
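For reviewers trying the patch out, here is a minimal usage sketch of the new options described in the commit message; the paths and pattern below are illustrative, not taken from the patch or its tests:

```shell
# Dump the whole cluster in directory format; with any non-plain format,
# -f/--file is required and names the output directory.
pg_dumpall --format directory --file /tmp/cluster_dump

# Restore only the globals (roles, tablespaces) to a plain SQL script.
pg_restore -C --globals-only --format directory \
    --file /tmp/globals.sql /tmp/cluster_dump

# Restore everything except databases matching a pattern.
pg_restore -C --exclude-database='tmp*' --format directory \
    --file /tmp/restore.sql /tmp/cluster_dump
```

This mirrors the invocations used in the new 007_pg_dumpall.pl TAP tests, where -C is required when restoring a pg_dumpall archive and --file captures the restore output.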
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v04-31102025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From a29957e76b98f6b544ca989d4d8098a5968534fa Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Fri, 31 Oct 2025 14:45:48 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if present, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 23 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 608 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 593 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 10 +
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
14 files changed, 1658 insertions(+), 146 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have the database's <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the entry already
+ * added to the array used for cleanup.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..d378c7b601e 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append output to the file, as we are restoring a
+ * dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,9 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1324,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1703,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1724,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 47913178a93..00ce946aab1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..601b9f9738e 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,9 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +128,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +160,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +210,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +222,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +262,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +275,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +293,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +335,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +451,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +526,35 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For archive
+ * formats, create the output directory instead.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* Set the file path for the global SQL commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +604,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +638,115 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* Create an archive entry for the \restrict command. */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ appendPQExpBuffer(qry, "\\restrict %s\n\n", restrict_key);
+ createOneArchiveEntry(qry->data, "RESTRICT");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings; /* already "on" or "off" */
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +790,51 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
+ else
+ {
+ /* Create an archive entry for the \unrestrict command. */
+ PQExpBuffer qry = createPQExpBuffer();
- if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ appendPQExpBuffer(qry, "\\unrestrict %s\n\n", restrict_key);
+ createOneArchiveEntry(qry->data, "UNRESTRICT");
+ destroyPQExpBuffer(qry);
+ }
- PQfinish(conn);
+ if (!globals_only && !roles_only && !tablespaces_only)
+ dumpDatabases(conn, archDumpFormat);
if (verbose)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +845,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster into an SQL script file or other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +927,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -791,7 +949,12 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +962,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -889,7 +1058,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1167,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1178,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1088,7 +1263,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1347,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1404,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1426,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1260,7 +1446,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1288,7 +1475,12 @@ dumpRoleGUCPrivs(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1504,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1528,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1342,20 +1540,31 @@ dropTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1382,7 +1591,12 @@ dumpTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1665,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1482,7 +1701,12 @@ dropDBs(PGconn *conn)
"ORDER BY datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1721,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1532,7 +1764,18 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1785,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1608,10 +1855,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1875,48 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; pg_dump will then create each database's
+ * dump file (or subdirectory) inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create the map file (stores one dboid/dbname pair per line). */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1933,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1959,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, emit it ourselves */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * For non-plain format dumps, compute the per-database dump path and
+ * append the dboid/dbname pair to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one "dboid dbname" line to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +2007,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
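Incidentally, the map.dat entries written above are one "dboid dbname" pair per line. A standalone sketch of the line parsing the restore side needs (hypothetical helper name, not the patch's code; it treats everything after the first space as the name so database names containing spaces survive):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Parse one "oid dbname" line from map.dat. Returns 1 on success,
 * 0 on malformed input. The trailing newline, if any, is stripped.
 */
static int
parse_map_line(const char *line, unsigned int *oid, char *dbname, size_t len)
{
	const char *sp = strchr(line, ' ');
	size_t		n;

	if (sp == NULL)
		return 0;
	*oid = (unsigned int) strtoul(line, NULL, 10);
	sp++;
	n = strlen(sp);
	if (n > 0 && sp[n - 1] == '\n')
		n--;
	if (n >= len)
		return 0;				/* name would not fit in the buffer */
	memcpy(dbname, sp, n);
	dbname[n] = '\0';
	return 1;
}
```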
@@ -1704,7 +2020,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2030,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, pass the output file name and the
+ * dump format to pg_dump so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1807,7 +2143,18 @@ dumpTimestamp(const char *msg)
time_t now = time(NULL);
if (strftime(buf, sizeof(buf), PGDUMP_STRFTIME_FMT, localtime(&now)) != 0)
- fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "-- %s %s\n\n", msg, buf);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+ }
}
/*
@@ -1868,3 +2215,54 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
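As an aside, the if/else chain above could be made table-driven; this is only a sketch with hypothetical names, not a request to change the patch:

```c
#include <string.h>
#include <strings.h>			/* strcasecmp */

typedef enum
{
	FMT_NULL, FMT_CUSTOM, FMT_DIRECTORY, FMT_TAR, FMT_UNKNOWN
} Fmt;

/*
 * Map a -F/--format argument to a format enum. Both the single-letter
 * and long spellings are accepted, case-insensitively.
 */
static Fmt
parse_format(const char *s)
{
	static const struct
	{
		const char *name;
		Fmt			fmt;
	}			tab[] = {
		{"c", FMT_CUSTOM}, {"custom", FMT_CUSTOM},
		{"d", FMT_DIRECTORY}, {"directory", FMT_DIRECTORY},
		{"p", FMT_NULL}, {"plain", FMT_NULL},
		{"t", FMT_TAR}, {"tar", FMT_TAR},
	};

	for (size_t i = 0; i < sizeof(tab) / sizeof(tab[0]); i++)
		if (strcasecmp(s, tab[i].name) == 0)
			return tab[i].fmt;
	return FMT_UNKNOWN;			/* caller reports the error */
}
```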
+
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..02176a77bd7 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,60 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
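Since DbOidName ends in a flexible array member, each entry must be allocated with room for the name appended. A minimal sketch of that allocation pattern (hypothetical helper, using plain malloc rather than the frontend pg_malloc wrappers):

```c
#include <stddef.h>				/* offsetof */
#include <stdlib.h>
#include <string.h>

typedef unsigned int Oid;

typedef struct DbOidName
{
	Oid			oid;
	char		str[];			/* null-terminated database name */
} DbOidName;

/* Allocate one entry with the name stored inline after the OID. */
static DbOidName *
make_db_entry(Oid oid, const char *name)
{
	DbOidName  *e = malloc(offsetof(DbOidName, str) + strlen(name) + 1);

	if (e == NULL)
		return NULL;
	e->oid = oid;
	strcpy(e->str, name);
	return e;
}
```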
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the global objects from the archive */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -316,6 +350,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
@@ -347,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +516,105 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a map.dat, toc.tar, or toc.dmp file is present, this is a
+ * pg_dumpall archive: restore the globals and all databases listed
+ * in map.dat, skipping those that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If globals-only, restore just the globals and skip the databases. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* no pg_dumpall archive marker; restore a single database archive */
+ {
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ *
+ * If globals_only is set, then skip DROP DATABASE commands from restore.
+ */
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers, append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore a single database from its toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +622,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only replace the AH handle registered for
+ * cleanup: the previous entry is already in the array and its connection
+ * has been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +650,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +682,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +699,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +735,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +841,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("considering PATTERN as NAME for --exclude-database option as no database connection while doing pg_restore");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of database
+ * names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is only global.dat file in dump, then return from here as
+ * there is no database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line (including any trailing newline) */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ /* copying namelen bytes drops the trailing newline from the name */
+ strlcpy(dbidname->str, dbname, namelen);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, based on
+ * the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Based on file, set path. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the db name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If --exclude-database patterns were given, connect to a database so
+ * the patterns can be matched server-side.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * Filter the database list according to the exclude patterns.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the global toc file and execute/append all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..56e89da1e5e
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,12 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +250,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named, and that name is used for failure reporting and to
+# name the dump and restore output files the test asserts on.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute before
+# the tests run. All setups are executed before any test execution.
+#
+# "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must include the --file flag to save the restore output so
+# that we can assert on it.
+#
+# "like" and "unlike" are regexps matched against the pg_restore output. At
+# least one of them must be filled in per test case, but a test case may have
+# both; see the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added to LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster for each test case's pg_restore run so
+ # that we don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases restoring a pg_dumpall dump with pg_restore.
+# test case 1: when -C is not used in pg_restore with a pg_dumpall dump
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: when a non-existent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
Hi Mahendra,
Thank you for your work on this feature.
I have just begun reviewing the latest patch and
encountered the following errors during the initial setup:
```
$ ./db/bin/pg_restore testdump_dir -C -d postgres -F d -p 5556
pg_restore: error: could not execute query: ERROR: syntax error at or near
"\\"
LINE 1: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj...
^
Command was: \restrict
aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hb
pg_restore: error: could not execute query: ERROR: syntax error at or near
"\\"
LINE 1: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCj...
^
Command was: \unrestrict
aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hb
pg_restore: error: could not execute query: ERROR: syntax error at or near
"\\"
LINE 1: \connect template1
^
Command was: \connect template1
pg_restore: error: could not execute query: ERROR: syntax error at or near
"\\"
LINE 1: \connect postgres
^
Command was: \connect postgres
```
To cross-check, I tried a plain dump (with pg_dumpall) and restored it
(SQL file restore) without the patch, and did not get the above
connection errors.
It appears there might be an issue with the dump file itself.
Please note that this is my first observation as I have just
started the review. I will continue with my assessment.
Regards,
Vaibhav Dalvi
EnterpriseDB
On Fri, Oct 31, 2025 at 2:51 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Tue, 28 Oct 2025 at 11:32, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <
mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net>
wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two
thoughts: we could either invent a JSON format for the globals, or we
could just use the existing archive format. I think the archive format
is pretty flexible, and should be able to accommodate this. The downside
is that it's not humanly readable. The upside is that we don't need to do
anything special either to write it or parse it.
I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is
likely to have corresponding tension on the pg_restore side. Resolving
that tension will reveal much of the project's scope that remained hidden
during the v18 attempt. Perhaps more important than that, using the
archiver API means future pg_dump and pg_restore options are more likely
to cooperate properly with $SUBJECT. In other words, I want it to be hard
to add pg_dump/pg_restore features that malfunction only for $SUBJECT
archives. The strength of the archiver architecture shows in how rarely
new features need format-specific logic and how rarely format-specific
bugs get reported. We've had little or no trouble with e.g. bugs that
appear in -Fd but not in -Fc.
Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com

Thanks Andrew, Noah and all others for the feedback.

Based on the above suggestions and discussions, I removed the SQL
commands from the global.dat file. For the global commands, we now
create a toc.dat/toc.dmp/toc.tar file, depending on the format
specified, and make archive entries for those commands. With this
approach we removed the hard-coded parsing of the global.dat file, and
we are able to skip DROP DATABASE with the globals-only option.

Here I am attaching a patch for review, testing and feedback. This is
a WIP patch. I will do some more code cleanup and will also add more
comments. Please review this and let me know design-level feedback.
Thanks to Tushar Ahuja for some internal testing and feedback.

Hi,
Here I am attaching an updated patch. In an offline discussion, Andrew
reported some test-case failures (thanks Andrew). I fixed those.
Please let me know your feedback on the patch.

Hi,
Here I am attaching a rebased patch, as v02 was failing on head.
Thanks Tushar for the testing.
Please review this and let me know your feedback.

Hi all,
Here I am attaching an updated patch for review and testing. Based on
some offline comments from Andrew, I did some code cleanup.
Please consider this patch for feedback.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Mon, 3 Nov 2025 at 12:06, Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com>
wrote:
Thanks Vaibhav for the review.
This change was added by me in v04. We should restore these commands
only when writing to a file. The attached patch fixes that.

If we dump and then restore into the same cluster with the same user, we
will get a CREATE ROLE error, as the role already exists. I think we can
either ignore this error, or keep it, since a restore can be done by a
different user.

[mst@localhost bin]$ ./pg_restore d1 -C -d postgres
pg_restore: error: could not execute query: ERROR: role "mst" already
exists
Command was: CREATE ROLE mst;
ALTER ROLE mst WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN
REPLICATION BYPASSRLS;

pg_restore: warning: errors ignored on restore: 1
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
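For reference, the on-disk layout the attached patch describes for directory format looks roughly like this (directory name and oids are illustrative; with -Fc the globals file is toc.dmp and each database becomes databases/dboid.dmp, with -Ft toc.tar and dboid.tar):

```
clusterdump/                 # pg_dumpall -F d -f clusterdump
├── toc.dat                  # globals: roles, tablespaces, database entries
├── map.dat                  # database oid -> name map
└── databases/
    ├── 16384/               # directory-format archive for one database
    └── 16385/
```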
Attachments:
v05_03112025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From 1bec0089809f9ba04b95b993caeefff068326c2d Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Mon, 3 Nov 2025 17:17:09 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
v05
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 31 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 608 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 593 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 10 +
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
14 files changed, 1666 insertions(+), 146 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in the
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the already-registered
+ * shutdown entry so that cleanup targets the currently open archive.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..e4cfa9a963a 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file, as we are restoring a
+ * dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,17 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip for RESTRICT, UNRESTRICT, CONNECT. */
+ if (!ropt->filename && te && te->tag &&
+ ((strcmp(te->tag, "RESTRICT") == 0) ||
+ (strcmp(te->tag, "UNRESTRICT") == 0) ||
+ (strcmp(te->tag, "CONNECT") == 0)))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1332,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1711,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1732,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 47913178a93..00ce946aab1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..601b9f9738e 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,9 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +128,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +160,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +210,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +222,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +262,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +275,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +293,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +335,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +451,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +526,35 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory first.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* Set the file path for the global SQL commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +604,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +638,115 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* create entry for restrict */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ appendPQExpBuffer(qry, "\\restrict %s\n\n", restrict_key);
+ createOneArchiveEntry(qry->data, "RESTRICT");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings; /* already "on" or "off" */
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +790,51 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
+ else
+ {
+ /* create entry for unrestrict */
+ PQExpBuffer qry = createPQExpBuffer();
- if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ appendPQExpBuffer(qry, "\\unrestrict %s\n\n", restrict_key);
+ createOneArchiveEntry(qry->data, "UNRESTRICT");
+ destroyPQExpBuffer(qry);
+ }
- PQfinish(conn);
+ if (!globals_only && !roles_only && !tablespaces_only)
+ dumpDatabases(conn, archDumpFormat);
if (verbose)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +845,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +927,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -791,7 +949,12 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +962,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ printfPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -889,7 +1058,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1167,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1178,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1088,7 +1263,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1347,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1404,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1426,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1260,7 +1446,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1288,7 +1475,12 @@ dumpRoleGUCPrivs(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1504,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1528,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1342,20 +1540,31 @@ dropTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ printfPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1382,7 +1591,12 @@ dumpTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1665,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1482,7 +1701,12 @@ dropDBs(PGconn *conn)
"ORDER BY datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1721,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1532,7 +1764,18 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1785,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1608,10 +1855,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1875,48 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If a directory/tar/custom format was specified, create a "databases"
+ * subdirectory under the main output directory; pg_dump will then write
+ * each database's dump file (or subdirectory) there.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory named "databases" under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1933,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1959,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain-format dump, compute the per-database dump
+ * path and append the database OID and name to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one "oid dbname" line per database to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +2007,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2020,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2030,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain-format dump, pass the output file name and
+ * the archive format to pg_dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1807,7 +2143,18 @@ dumpTimestamp(const char *msg)
time_t now = time(NULL);
if (strftime(buf, sizeof(buf), PGDUMP_STRFTIME_FMT, localtime(&now)) != 0)
- fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "-- %s %s\n\n", msg, buf);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+ }
}
/*
@@ -1868,3 +2215,54 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..02176a77bd7 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,60 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only the global objects from the archive */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -316,6 +350,9 @@ main(int argc, char **argv)
exit(1);
opts->exit_on_error = true;
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
@@ -347,6 +384,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +516,105 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a map.dat (or toc.tar/toc.dmp) file is present, restore all the
+ * databases listed in map.dat, skipping any that match an
+ * --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, the -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If globals-only, restore just the global objects and skip databases. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* not a pg_dumpall archive; restore a single database */
+ {
+ n_errors = restore_one_database(inputFileSpec, opts, numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restores all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers, append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database from its toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +622,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only replace the archive handle registered
+ * for cleanup: the previous entry is already in the on-exit array and its
+ * connection has been closed, so we can reuse the same slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +650,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +682,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +699,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +735,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +841,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries from dbname_oid_list that pattern match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns are matched as literal names");
+
+ /*
+ * Check each dbname against the exclude patterns, and mark matching
+ * entries to be skipped during restore.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding OIDs.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If the dump contains no map.dat file, there is no database to
+ * restore, so just return.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the dump directory,
+ * using the map.dat file mapping to locate them.
+ *
+ * Databases specified with the --exclude-database option are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Based on file, set path. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the db name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If --exclude-database patterns were given, connect to a database so
+ * that the patterns can be matched server-side.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * filter the db list according to the exclude patterns
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the global toc file and execute/append all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..56e89da1e5e
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,12 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +250,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named; the names are used for failure reporting and to
+# save the dump and restore artifacts the test needs in order to assert.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute
+# before actually running the tests. All setups are executed before any
+# test execution.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that we
+# can assert on it.
+#
+# "like" and "unlike" are regexps matched against the pg_restore output. At
+# least one of them must be set per test case, and both may be present; see
+# the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster for each test case run so that we don't
+ # need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases restoring a pg_dumpall dump with pg_restore.
+# Test case 1: -C is not used in pg_restore with a pg_dumpall dump.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# Test case 2: --list is used with a pg_dumpall dump.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# Test case 3: a non-existent database is given with the -d option.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
Hi Mahendra,
I have a few more review comments regarding the patch:
1. Is the following change in `src/bin/pg_dump/connectdb.c` intentional?
```
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
```
When I re-added `PQfinish(conn);`, the regression tests passed successfully.
The `git diff` shows:
```
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index f44a8a45fca..d55d53dbeea 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,6 +287,7 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
+ PQfinish(conn);
exit_nicely(1);
}
```
If this change is intentional, could you please add a test case to justify
or demonstrate the need for it?
2. Please remove the extra blank line before `static void usage(const char
*progname);`.
```
+
static void usage(const char *progname);
```
3. There is an unnecessary line deletion that does not appear to be related
to this feature:
```
opts->cparams.pghost = pg_strdup(optarg);
break;
-
```
Could this deletion be part of a separate cleanup?
Regards,
Vaibhav Dalvi
On Mon, Nov 3, 2025 at 12:05 PM Vaibhav Dalvi <
vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Thank you for your work on this feature.
I have just begun reviewing the latest patch and
encountered the following errors during the initial setup:
```
$ ./db/bin/pg_restore testdump_dir -C -d postgres -F d -p 5556
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj...
^
Command was: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hb
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCj...
^
Command was: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hb
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \connect template1
^
Command was: \connect template1
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \connect postgres
^
Command was: \connect postgres
```
To cross-check, I tried a plain dump (with pg_dumpall) and restored the SQL
file without the patch, and did not get the above connection errors. It
appears there might be an issue with the dump file itself.
Please note that this is my first observation, as I have just started the
review. I will continue with my assessment.
Regards,
Vaibhav Dalvi
EnterpriseDB
On Fri, Oct 31, 2025 at 2:51 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Tue, 28 Oct 2025 at 11:32, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <
mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net>
wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had
two thoughts
- we could use invent a JSON format for the globals, or we could
just use
the existing archive format. I think the archive format is pretty
flexible,
and should be able to accommodate this. The downside is it's not
humanly
readable. The upside is that we don't need to do anything special
either to
write it or parse it.
I would first try to use the existing archiver API, because that
makes it
harder to miss bugs. Any tension between that API and pg_dumpall
is likely to
have corresponding tension on the pg_restore side. Resolving
that tension
will reveal much of the project's scope that remained hidden
during the v18
attempt. Perhaps more important than that, using the archiver
API means
future pg_dump and pg_restore options are more likely to
cooperate properly
with $SUBJECT. In other words, I want it to be hard to add
pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The
strength of the
archiver architecture shows in how rarely new features need
format-specific
logic and how rarely format-specific bugs get reported. We've
had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.
Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Thanks Andrew, Noah and all the others for the feedback.
Based on the above suggestions and discussions, I removed the SQL commands
from the global.dat file. For the global commands, we now create a
toc.dat/toc.dmp/toc.tar file, depending on the specified format, and make
archive entries for these global commands. With this approach, we removed
the hard-coded parsing of the global.dat file, and we are able to skip
DROP DATABASE with the globals-only option.
Here, I am attaching a patch for review, testing and feedback. This is
a WIP patch. I will do some more code cleanup and will add some more
comments. Please review this and let me know design-level feedback.
Thanks Tushar Ahuja for some internal testing and feedback.
Hi,
Here, I am attaching an updated patch. In offline discussion, Andrew
reported some test-case failures (thanks Andrew). I fixed those.
Please let me know your feedback on the patch.
Hi,
Here I am attaching a re-based patch as v02 was failing on head.
Thanks Tushar for the testing.
Please review this and let me know your feedback.
Hi all,
Here I am attaching an updated patch for review and testing. Based on
some offline comments by Andrew, I did some code cleanup.
Please consider this patch for feedback.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Mon, Nov 3, 2025 at 5:25 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Mon, 3 Nov 2025 at 12:06, Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com>
wrote:
Thanks Vaibhav for the review.
This change was added by me in v04. Only in the case of a file, we should
restore these commands. Attached patch is fixing the same.Thanks Mahendra, I am getting a segmentation fault against v05 patch.
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft --file a.3 -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '',
false);
Segmentation fault
Issue is coming with all output file formats -F[t/c/d] except plain
regards,
On 2025-11-04 Tu 7:53 AM, tushar wrote:
On Mon, Nov 3, 2025 at 5:25 PM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:
[...]

Thanks Mahendra, I am getting a segmentation fault against the v05 patch.
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft --file a.3 -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '',
false);
Segmentation fault

Issue is coming with all output file formats -F[t/c/d] except plain.
Yeah, I don't think we need to dump the timestamp in non-text modes.
This fix worked for me:
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 601b9f9738e..f66cc26d9a2 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -638,7 +638,7 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
/* create a archive file for global commands. */
@@ -2258,6 +2258,7 @@ createDumpId(void)
static void
createOneArchiveEntry(const char *query, const char *tag)
{
+ Assert(fout != NULL);
ArchiveEntry(fout,
nilCatalogId, /* catalog ID */
createDumpId(), /* dump ID */
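The hunk above (cut off in the quote) registers each global command as its own numbered TOC entry in the archive. As a self-contained sketch of that pattern, using simplified stand-in types rather than PostgreSQL's real ArchiveEntry() signature:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Simplified stand-in types, not PostgreSQL's real archiver API: the
 * idea is only that each global command (CREATE ROLE, etc.) becomes
 * its own sequentially numbered TOC entry, so pg_restore can walk and
 * filter global entries like any other archive entries.
 */
typedef struct TocEntrySketch
{
	int			dumpId;			/* sequential entry id */
	char		tag[32];		/* e.g. "ROLE", "TABLESPACE" */
	char	   *defn;			/* the command text */
	struct TocEntrySketch *next;
} TocEntrySketch;

static TocEntrySketch *toc_head;
static int	next_dump_id;

/* Register one global command; returns its dump id. */
static int
create_one_entry_sketch(const char *command, const char *tag)
{
	TocEntrySketch *e = malloc(sizeof(TocEntrySketch));

	e->dumpId = ++next_dump_id;
	snprintf(e->tag, sizeof(e->tag), "%s", tag);
	e->defn = malloc(strlen(command) + 1);
	strcpy(e->defn, command);
	e->next = toc_head;
	toc_head = e;
	return e->dumpId;
}
```

The Assert(fout != NULL) added in the hunk guards exactly the failure mode tushar hit: calling the entry-creation path before any archive has been opened.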
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Hi Mahendra,
Thank you for the fix. Please find my further review comments below.
### Restrict-Key Option
The `--restrict-key` option is currently being accepted by
`pg_dumpall` even when non-plain formats are specified,
which contradicts its intended use only with the plain format.
For example:
```
$ ./db/bin/pg_dump --format=d -f testdump_dir --restrict-key=RESTRICT_KEY
pg_dump: error: option --restrict-key can only be used with --format=plain
$ ./db/bin/pg_dumpall --format=d -f testdump_dir --restrict-key=RESTRICT_KEY
pg_dumpall: error: invalid restrict key
```
I have attached a delta patch that addresses the issue with the
`--restrict-key` option. It would be beneficial to include a dedicated
test case for this check.
### Use of Dump Options Structure (dopt)
Please ensure consistency by utilizing the main dump options
structure (`dopt`) instead of declaring and using individual variables
where the structure already provides fields. For example, the
`output_clean` variable seems redundant here:
```c
case 'c':
output_clean = true;
dopt.outputClean = 1;
break;
```
In my attached delta file, I have replaced the unnecessary
`restrict_key` variable with `dopt.restrict_key`.
### Cosmetic Issues
1. Please review the spacing around the pointer:
```c
+ ((ArchiveHandle * )fout) ->connection = conn;
+ ((ArchiveHandle * ) fout) -> public.numWorkers = 1;
```
2. Please be consistent with the punctuation of single-line comments;
some end with a full stop (`.`) and others do not.
3. In the SGML documentation changes, some new statements start
with one space, and others start with two. Please adhere to a single
standard for indentation across the patch.
Regards,
Vaibhav
EnterpriseDB
On Mon, Nov 3, 2025 at 5:24 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
[...]

If we dump and restore the same file with the same user, then we will get
an error of ROLE CREATE as the same role is already created. I think,
either we can ignore this error, or we can keep it as a restore can be done
with different users.

[mst@localhost bin]$ ./pg_restore d1 -C -d postgres
pg_restore: error: could not execute query: ERROR: role "mst" already exists
Command was: CREATE ROLE mst;
ALTER ROLE mst WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;

pg_restore: warning: errors ignored on restore: 1
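One way to implement the "ignore this error" idea from the previous paragraph is to match on the server's SQLSTATE rather than the message text. The helper below is hypothetical, not code from the patch; 42710 (duplicate_object) is PostgreSQL's documented error code for a pre-existing role, and 42P04 (duplicate_database) is its counterpart for CREATE DATABASE:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical helper, not part of the patch: decide whether a restore
 * error can be downgraded to a warning because the object already
 * exists.  SQLSTATE 42710 (duplicate_object) is what "role ... already
 * exists" reports; 42P04 (duplicate_database) covers CREATE DATABASE.
 */
static bool
is_ignorable_duplicate(const char *sqlstate)
{
	return sqlstate != NULL &&
		(strcmp(sqlstate, "42710") == 0 ||	/* duplicate_object */
		 strcmp(sqlstate, "42P04") == 0);	/* duplicate_database */
}
```

Matching on SQLSTATE keeps the check locale-independent, whereas grepping the error message would break under non-English lc_messages.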
On Fri, Oct 31, 2025 at 2:51 PM Mahendra Singh Thalor <
mahi6run@gmail.com> wrote:
On Tue, 28 Oct 2025 at 11:32, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <
mahi6run@gmail.com> wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <
mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net>
wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:

OK, now that's reverted we should discuss how to proceed. I had two
thoughts - we could invent a JSON format for the globals, or we could
just use the existing archive format. I think the archive format is
pretty flexible, and should be able to accommodate this. The downside
is it's not humanly readable. The upside is that we don't need to do
anything special either to write it or parse it.

I would first try to use the existing archiver API, because that makes
it harder to miss bugs. Any tension between that API and pg_dumpall is
likely to have corresponding tension on the pg_restore side. Resolving
that tension will reveal much of the project's scope that remained
hidden during the v18 attempt. Perhaps more important than that, using
the archiver API means future pg_dump and pg_restore options are more
likely to cooperate properly with $SUBJECT. In other words, I want it
to be hard to add pg_dump/pg_restore features that malfunction only
for $SUBJECT archives. The strength of the archiver architecture shows
in how rarely new features need format-specific logic and how rarely
format-specific bugs get reported. We've had little or no trouble
with, e.g., bugs that appear in -Fd but not in -Fc.
Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
[...]
Attachments:
delta-v05-non-text-modes-for-pg_dumpall.patch
From 4debd839b5dbbfd188ad2422112f8e303c5d7a71 Mon Sep 17 00:00:00 2001
From: Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com>
Date: Wed, 5 Nov 2025 06:22:00 +0000
Subject: [PATCH v1 1/1] delta v05 non-text modes for pg_dumpall
This delta patch is to fix --restrict-key
with non-text dump format.
Vaibhav Dalvi
---
src/bin/pg_dump/pg_dumpall.c | 52 +++++++++++++-----------------------
1 file changed, 19 insertions(+), 33 deletions(-)
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 601b9f9738e..9e447dc9738 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -127,7 +127,6 @@ static char *filename = NULL;
static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
-static char *restrict_key;
static Archive *fout = NULL;
static pg_compress_specification compression_spec = {0};
static int dumpIdVal = 0;
@@ -397,7 +396,7 @@ main(int argc, char *argv[])
break;
case 9:
- restrict_key = pg_strdup(optarg);
+ dopt.restrict_key = pg_strdup(optarg);
appendPQExpBufferStr(pgdumpopts, " --restrict-key ");
appendShellString(pgdumpopts, optarg);
break;
@@ -555,15 +554,20 @@ main(int argc, char *argv[])
else
OPF = stdout;
- /*
- * If you don't provide a restrict key, one will be appointed for you.
- */
- if (!restrict_key)
- restrict_key = generate_restrict_key();
- if (!restrict_key)
- pg_fatal("could not generate restrict key");
- if (!valid_restrict_key(restrict_key))
- pg_fatal("invalid restrict key");
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * If you don't provide a restrict key, one will be appointed for you.
+ */
+ if (!dopt.restrict_key)
+ dopt.restrict_key = generate_restrict_key();
+ if (!dopt.restrict_key)
+ pg_fatal("could not generate restrict key");
+ if (!valid_restrict_key(dopt.restrict_key))
+ pg_fatal("invalid restrict key");
+ }
+ else if (dopt.restrict_key)
+ pg_fatal("option --restrict-key can only be used with --format=plain");
/*
* If there was a database specified on the command line, use that,
@@ -670,15 +674,6 @@ main(int argc, char *argv[])
createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
- /* create entry for restrict */
- {
- PQExpBuffer qry = createPQExpBuffer();
-
- appendPQExpBuffer(qry, "\\restrict %s\n\n", restrict_key);
- createOneArchiveEntry(qry->data, "RESTRICT");
- destroyPQExpBuffer(qry);
- }
-
/* default_transaction_read_only = off */
{
PQExpBuffer qry = createPQExpBuffer();
@@ -727,7 +722,7 @@ main(int argc, char *argv[])
* meta-commands so that the client machine that runs psql with the dump
* output remains unaffected.
*/
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ fprintf(OPF, "\\restrict %s\n\n", dopt.restrict_key);
/*
* We used to emit \connect postgres here, but that served no purpose
@@ -793,19 +788,10 @@ main(int argc, char *argv[])
if (archDumpFormat == archNull)
{
/*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
+ * Exit restricted mode just before dumping the databases. pg_dump
+ * will handle entering restricted mode again as appropriate.
*/
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
- }
- else
- {
- /* create entry for unrestrict */
- PQExpBuffer qry = createPQExpBuffer();
-
- appendPQExpBuffer(qry, "\\unrestrict %s\n\n", restrict_key);
- createOneArchiveEntry(qry->data, "UNRESTRICT");
- destroyPQExpBuffer(qry);
+ fprintf(OPF, "\\unrestrict %s\n\n", dopt.restrict_key);
}
if (!globals_only && !roles_only && !tablespaces_only)
--
2.43.0
Hi Mahendra,
Here are a few more comments following my review of the patch:
### 1. Incorrect Comment for `-g` (globals-only) Option
The comment for the `-g` case in the code states that it restores the
`global.dat` file. However, in the non-text dump output, I only see the
following files: `databases`, `map.dat`, and `toc.dat`.
```c
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
```
Please update this comment to accurately reflect the file being restored
(e.g., `toc.dat` or the global objects within the archive).
### 2. Incorrect Order of `case` Statements in `pg_restore.c`
The new `case 7` statement in `pg_restore.c` appears to be
inserted before `case 6`, disrupting the numerical order.
```c
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
```
Please re-order the `case` statements so they follow ascending
numerical order.
### 3. Missing Example in SGML Documentation
The SGML documentation for `pg_dumpall` is missing an explicit
example demonstrating its use with non-text formats (e.g., directory
format).
It would be beneficial to include a clear example for this new feature.
### 4. Cosmetic Issues
Please address the following minor stylistic points:
Please ensure the function signatures
adhere to standard coding style, particularly for line wrapping.
The following lines seem to have inconsistent indentation:
```c
static int restore_global_objects(const char *inputFileSpec, RestoreOptions
*opts,
int numWorkers, bool append_data, int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
```
Please fix instances where the 80-character line limit is
crossed, such as in the example below:
```c
n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1,
false);
```
I believe this concludes my formal review.
Thanks,
Vaibhav Dalvi
On Wed, Nov 5, 2025 at 12:29 PM Vaibhav Dalvi <
vaibhav.dalvi@enterprisedb.com> wrote:
[...]
Thanks Vaibhav, Tushar and Andrew for the review and testing.
On Mon, 3 Nov 2025 at 17:30, Vaibhav Dalvi
<vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
I have a few more review comments regarding the patch:
1. Is the following change in `src/bin/pg_dump/connectdb.c` intentional?
```
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
```

Yes, we need this. If there is any error, then we were trying to
disconnect from the database in two places, so we were getting a
crash. I will try to reproduce the crash without this patch and will
respond.
On Tue, 4 Nov 2025 at 18:23, tushar <tushar.ahuja@enterprisedb.com> wrote:
Thanks Mahendra, I am getting a segmentation fault against v05 patch.
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft --file a.3 -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
Segmentation fault

Issue is coming with all output file formats -F[t/c/d] except plain

regards,

Thanks for the report. Fixed.
On Tue, 4 Nov 2025 at 22:25, Andrew Dunstan <andrew@dunslane.net> wrote:
Yeah, I don't think we need to dump the timestamp in non-text modes. This fix worked for me:
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 601b9f9738e..f66cc26d9a2 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -638,7 +638,7 @@ main(int argc, char *argv[])
 	if (quote_all_identifiers)
 		executeCommand(conn, "SET quote_all_identifiers = true");

-	if (verbose)
+	if (verbose && archDumpFormat == archNull)
 		dumpTimestamp("Started on");
Thanks Andrew. Yes, we should not dump the timestamp in non-text modes.
On Wed, 5 Nov 2025 at 18:47, Vaibhav Dalvi
<vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Here are a few more comments following my review of the patch:
### 1. Incorrect Comment for `-g` (globals-only) Option
The comment for the `-g` case in the code states that it restores the
`global.dat` file. However, in the non-text dump output, I only see the
following files: `databases`, `map.dat`, and `toc.dat`.

```c
+ case 'g':
+     /* restore only global.dat file from directory */
+     globals_only = true;
+     break;
```

Please update this comment to accurately reflect the file being restored
(e.g., `toc.dat` or the global objects within the archive).

Fixed.
### 2. Incorrect Order of `case` Statements in `pg_restore.c`
The new `case 7` statement in `pg_restore.c` appears to be
inserted before `case 6`, disrupting the numerical order.

```c
+ case 7: /* database patterns to skip */
+     simple_string_list_append(&db_exclude_patterns, optarg);
+     break;
  case 6:
      opts->restrict_key = pg_strdup(optarg);
```

Please re-order the `case` statements so they follow ascending
numerical order.

Fixed.
### 3. Missing Example in SGML Documentation
The SGML documentation for `pg_dumpall` is missing an explicit
example demonstrating its use with non-text formats (e.g., directory format).
It would be beneficial to include a clear example for this new feature.

I don't think we add such examples to the docs; we already added test
cases in the code. If others also feel that we should add an example
to the SGML documentation, I will update the docs accordingly.
### 4. Cosmetic Issues
Please address the following minor stylistic points:
Please ensure the function signatures
adhere to standard coding style, particularly for line wrapping.
The following lines seem to have inconsistent indentation:

```c
static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
    int numWorkers, bool append_data, int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
    SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
```

Please fix instances where the 80-character line limit is
crossed, such as in the example below:

Fixed.
```c
n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
```

I believe this concludes my formal review.
Thanks,
Vaibhav Dalvi

On Wed, Nov 5, 2025 at 12:29 PM Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Thank you for the fix. Please find my further review comments below.
### Restrict-Key Option
The `--restrict-key` option is currently being accepted by
`pg_dumpall` even when non-plain formats are specified,
which contradicts its intended use only with the plain format.
For example:
```
$ ./db/bin/pg_dump --format=d -f testdump_dir --restrict-key=RESTRICT_KEY
pg_dump: error: option --restrict-key can only be used with --format=plain
$ ./db/bin/pg_dumpall --format=d -f testdump_dir --restrict-key=RESTRICT_KEY
pg_dumpall: error: invalid restrict key
```

I have attached a delta patch that addresses the issue with the
`--restrict-key` option. It would be beneficial to include a dedicated
test case for this check.
We should dump the restrict key in all modes, as we need to be able to
restore with the "-f file" option in text mode.
Ex: pg_dumpall --format=d -f testdump_dir
and restore: pg_restore testdump_dir -d databasename -C -f testdumpfile
(In testdumpfile, we will generate the commands from the archive dump.)
So I am not merging this delta patch.
### Use of Dump Options Structure (dopt)
Please ensure consistency by utilizing the main dump options
structure (`dopt`) instead of declaring and using individual variables
where the structure already provides fields. For example, the
`output_clean` variable seems redundant here:

```c
case 'c':
    output_clean = true;
    dopt.outputClean = 1;
    break;
```

output_clean is not added by this patch. I will analyze this comment
and respond in the next update.

In my attached delta file, I have replaced the unnecessary
`restrict_key` variable with `dopt.restrict_key`.

This is also not part of this patch. If you feel it should be moved
into the dump options structure, please suggest that in a separate
thread.
### Cosmetic Issues
1. Please review the spacing around the pointer:

```c
+ ((ArchiveHandle * )fout) ->connection = conn;
+ ((ArchiveHandle * ) fout) -> public.numWorkers = 1;
```

Fixed.

2. Please be consistent with the punctuation of single-line comments;
some end with a full stop (`.`) and others do not.

Based on nearby code comments, I made changes. I will try to fix these
inconsistencies.
3. In the SGML documentation changes, some new statements start
with one space, and others start with two. Please adhere to a single
standard for indentation across the patch.
Okay. I will fix these.
Regards,
Vaibhav
EnterpriseDBOn Mon, Nov 3, 2025 at 5:24 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Mon, 3 Nov 2025 at 12:06, Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Thank you for your work on this feature.
I have just begun reviewing the latest patch and
encountered the following errors during the initial setup:```
$ ./db/bin/pg_restore testdump_dir -C -d postgres -F d -p 5556
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj...
^
Command was: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hbpg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCj...
^
Command was: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hbpg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \connect template1
^
Command was: \connect template1pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \connect postgres
^
Command was: \connect postgres
```
To cross-check, I tried a plain dump (with pg_dumpall) and restored it
(SQL file restore) without the patch, and did not get the above
connection errors. It appears there might be an issue with the dump file itself.
Please note that this is my first observation, as I have just
started the review. I will continue with my assessment.

Regards,
Vaibhav Dalvi
EnterpriseDB

Thanks Vaibhav for the review.
This change was added by me in v04. We should restore these commands only when writing to a file. The attached patch fixes this.

If we dump and restore the same archive with the same user, we get a CREATE ROLE error because the role already exists. I think we can either ignore this error, or keep it as-is, since a restore can be done by a different user.
[mst@localhost bin]$ ./pg_restore d1 -C -d postgres
pg_restore: error: could not execute query: ERROR: role "mst" already exists
Command was: CREATE ROLE mst;
ALTER ROLE mst WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;
pg_restore: warning: errors ignored on restore: 1
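If the "ignore this error" route were taken, the restore side would need a way to classify the failure; "role already exists" carries SQLSTATE 42710 (duplicate_object). A hypothetical sketch of such a filter, with names of my own invention rather than anything from the patch:

```python
# Hypothetical sketch: decide whether a restore-time error is ignorable.
# SQLSTATE 42710 is duplicate_object, which CREATE ROLE raises when the
# role already exists. Function and set names are illustrative only.
IGNORABLE_SQLSTATES = {"42710"}  # duplicate_object

def is_ignorable_restore_error(sqlstate, command):
    """Treat duplicate-object errors from CREATE ROLE as non-fatal."""
    return (sqlstate in IGNORABLE_SQLSTATES
            and command.lstrip().startswith("CREATE ROLE"))

print(is_ignorable_restore_error("42710", "CREATE ROLE mst;"))  # prints True
```

With a filter like this, pg_restore could count such failures as warnings (as it already does for "errors ignored on restore") instead of aborting.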
On Fri, Oct 31, 2025 at 2:51 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Tue, 28 Oct 2025 at 11:32, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could use invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com

Thanks Andrew, Noah and all others for the feedback.
Based on the above suggestions and discussions, I removed the SQL commands
from the global.dat file. For global commands, we now create a
toc.dat/toc.dmp/toc.tar file based on the specified format, and make
archive entries for these global commands accordingly. With this approach,
we removed the hard-coded parsing of the global.dat file, and we can skip
DROP DATABASE with the globals-only option.

Here, I am attaching a patch for review, testing, and feedback. This is
a WIP patch. I will do some more code cleanup and add some more
comments. Please review this and give me design-level feedback.
Thanks to Tushar Ahuja for some internal testing and feedback.

Hi,
Here, I am attaching an updated patch. In an offline discussion, Andrew
reported some test-case failures (thanks, Andrew). I fixed those.
Please let me know your feedback on the patch.

Hi,
Here I am attaching a rebased patch, as v02 was failing on head.
Thanks Tushar for the testing.
Please review this and let me know your feedback.

Hi all,
Here I am attaching an updated patch for review and testing. Based on
some offline comments from Andrew, I did some code cleanup.
Please consider this patch for feedback.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Here, I am attaching an updated patch for the review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v06_06112025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch (application/octet-stream)
From 352b7abe680ac767499ea3bfca07f86b9f0637fb Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Thu, 6 Nov 2025 10:33:16 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
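The commit message above describes map.dat as a map from oids to database names, written next to toc.dat. Assuming a simple whitespace-separated "oid name" line format (an assumption of mine; the patch defines the actual on-disk format), a reader-side parse could be sketched like this:

```python
# Hypothetical sketch of parsing a map.dat-style file as described in the
# commit message: one database per line, oid followed by its name.
# The exact format is an assumption, not taken from the patch.
import io

def parse_map_dat(stream):
    """Return a {oid: dbname} dict from a map.dat-style text stream."""
    mapping = {}
    for line in stream:
        line = line.strip()
        if not line:
            continue
        oid, name = line.split(None, 1)  # name may itself contain spaces
        mapping[int(oid)] = name
    return mapping

sample = io.StringIO("5 postgres\n16384 mydb\n")
print(parse_map_dat(sample))  # prints {5: 'postgres', 16384: 'mydb'}
```

Something along these lines is what pg_restore would consult to pair each databases/<oid> archive with the database name to create and restore into.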
v06
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 31 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 621 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 595 ++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 10 +
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
14 files changed, 1680 insertions(+), 147 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+        Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non plain text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, then update already added entry
+ * into array for cleanup.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..f5df9ac5c2b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, then append data into file as we are restoring dump
+ * of multiple databases which was taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,17 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip for RESTRICT, UNRESTRICT, CONNECT. */
+ if (!ropt->filename && te && te->tag &&
+ ((strcmp(te->tag, "RESTRICT") == 0) ||
+ (strcmp(te->tag, "UNRESTRICT") == 0) ||
+ (strcmp(te->tag, "CONNECT") == 0)))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1332,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1711,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1732,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a00918bacb4..13e1764ec70 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..5b9144aa002 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,9 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +128,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +160,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +210,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +222,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +262,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +275,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +293,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +335,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +451,21 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +526,35 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* set file path for global sql commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +604,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +638,114 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+	/* Create an archive file for global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle*)fout)->connection = conn;
+ ((ArchiveHandle*)fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dumpall.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* create entry for restrict */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\restrict %s\n\n", restrict_key);
+ createOneArchiveEntry(qry->data, "RESTRICT");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings ? "on" : "off";
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +789,51 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
+ else
+ {
+ /* create entry for unrestrict */
+ PQExpBuffer qry = createPQExpBuffer();
- if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ appendPQExpBuffer(qry, "\\unrestrict %s\n\n", restrict_key);
+ createOneArchiveEntry(qry->data, "UNRESTRICT");
+ destroyPQExpBuffer(qry);
+ }
- PQfinish(conn);
+ if (!globals_only && !roles_only && !tablespaces_only)
+ dumpDatabases(conn, archDumpFormat);
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +844,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +926,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -791,7 +948,12 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +961,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -889,7 +1057,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1166,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1177,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1088,7 +1262,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1346,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1403,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1425,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1260,7 +1445,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1288,7 +1474,12 @@ dumpRoleGUCPrivs(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1503,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1527,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1342,20 +1539,31 @@ dropTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1382,7 +1590,12 @@ dumpTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1664,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1482,7 +1700,12 @@ dropDBs(PGconn *conn)
"ORDER BY datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1720,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1532,7 +1763,18 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1784,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1608,10 +1854,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1874,48 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If directory/tar/custom format is specified, create a subdirectory
+ * under the main directory; pg_dump will then create each database's
+ * dump file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1932,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1958,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain format dump, then append dboid and dbname to
+ * the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +2006,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2019,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2029,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, then append file name and dump
+ * format to the pg_dump command to produce an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1807,7 +2142,18 @@ dumpTimestamp(const char *msg)
time_t now = time(NULL);
if (strftime(buf, sizeof(buf), PGDUMP_STRFTIME_FMT, localtime(&now)) != 0)
- fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "-- %s %s\n\n", msg, buf);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+ }
}
/*
@@ -1868,3 +2214,66 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the user-supplied dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Return the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Create a single archive entry containing the given SQL text.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..97a6bcb6d31 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,61 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +119,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +173,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global SQL commands */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +356,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +386,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +518,105 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a map.dat file is present, then restore all the databases
+ * listed in map.dat, but skip restoring those matching
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, the -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* map.dat file does not exist; restore a single archive */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ *
+ * If globals_only is set, then skip DROP DATABASE commands from restore.
+ */
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers,
+ append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * This will restore one database using its toc.dat file.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +624,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only update the AH handle used for
+ * cleanup: the previous entry is already in the array and its
+ * connection has been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +652,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +684,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +701,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +737,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +843,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list whose names match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns are matched as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file and read it line by line, preparing a list of
+ * database names and their corresponding db_oids.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in the dump, then return; there is no
+ * database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * This will restore all databases whose dumps are present in the
+ * directory, based on the map.dat file mapping.
+ *
+ * Databases specified with the exclude-database option are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Based on file, set path. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the db name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If exclude-patterns is given, then connect to the database to process
+ * it.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * filter the db list according to the exclude patterns
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open toc.dat file and execute/append all the global sql commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..56e89da1e5e
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,12 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +250,8 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and also to name the dump and restore files the test asserts on.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute
+# before the tests are actually run. All setups are executed before any
+# test execution.
+#
+# "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that
+# we can assert on it.
+#
+# "like" and "unlike" are regexps used to match the pg_restore output. Each
+# test case must have at least one of them filled in, but it can have both.
+# See the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster for each test case's pg_restore run so that
+ # we don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with a pg_dumpall dump restored using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When non-exist database is given with -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
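As an aside for readers of the patch: the map.dat file consumed by get_dbname_oid_list_from_mfile() holds one `<dboid> <dbname>` line per dumped database. A rough stand-alone C sketch of the same line validation (illustrative only, not part of the patch; the function name is hypothetical):

```c
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/*
 * Sketch of the map.dat line check: leading digits form the database OID,
 * a single space separates it from the name, and the trailing newline is
 * dropped.  Returns 1 on success, 0 if the line is malformed (no digits,
 * no separator, OID 0, or an empty name).
 */
static int
parse_map_line(const char *line, unsigned int *oid, char *dbname, size_t dblen)
{
	const char *p = line;
	size_t		namelen;

	/* Skip over the leading run of digits forming the OID. */
	while (isdigit((unsigned char) *p))
		p++;

	/* Need at least one digit followed by a space separator. */
	if (p == line || *p != ' ')
		return 0;

	if (sscanf(line, "%u", oid) != 1)
		return 0;
	p++;						/* step over the separator */

	/* Reject InvalidOid (0) and empty names (a bare newline). */
	namelen = strlen(p);
	if (*oid == 0 || namelen <= 1)
		return 0;

	/* Copy the name, dropping the trailing newline, as the patch's
	 * strlcpy(dbidname->str, dbname, namelen) does. */
	if (namelen >= dblen)
		namelen = dblen - 1;
	memcpy(dbname, p, namelen - 1);
	dbname[namelen - 1] = '\0';
	return 1;
}
```

A digit-free prefix, a missing space separator, OID 0, or an empty name all reject the entry, matching the pg_fatal() case in the patch.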
On Thu, 6 Nov 2025 at 11:03, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Thanks Vaibhav, Tushar and Andrew for the review and testing.
On Mon, 3 Nov 2025 at 17:30, Vaibhav Dalvi
<vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
I have a few more review comments regarding the patch:
1. Is the following change in `src/bin/pg_dump/connectdb.c` intentional?
```
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
```
Yes, we need this. If there is any error, then we were trying to
disconnect the database in two places, so we were getting a crash. I will
try to reproduce the crash without this patch and will respond.
On Tue, 4 Nov 2025 at 18:23, tushar <tushar.ahuja@enterprisedb.com> wrote:
Thanks Mahendra, I am getting a segmentation fault against v05 patch.
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft --file a.3 -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
Segmentation fault
Issue is coming with all output file formats -F[t/c/d] except plain.
regards,
Thanks for the report. Fixed,
On Tue, 4 Nov 2025 at 22:25, Andrew Dunstan <andrew@dunslane.net> wrote:
Yeah, I don't think we need to dump the timestamp in non-text modes. This fix worked for me:
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 601b9f9738e..f66cc26d9a2 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -638,7 +638,7 @@ main(int argc, char *argv[])
 	if (quote_all_identifiers)
 		executeCommand(conn, "SET quote_all_identifiers = true");
-	if (verbose)
+	if (verbose && archDumpFormat == archNull)
 		dumpTimestamp("Started on");
Thanks Andrew. Yes, we should not dump timestamp in non-text modes.
On Wed, 5 Nov 2025 at 18:47, Vaibhav Dalvi
<vaibhav.dalvi@enterprisedb.com> wrote:Hi Mahendra,
Here are a few more comments following my review of the patch:
### 1\. Incorrect Comment for `-g` (globals-only) Option
The comment for the `-g` case in the code states that it restores the
`global.dat` file. However, in the non-text dump output, I only see the
following files: `databases`, `map.dat`, and `toc.dat`.
```c
+ case 'g':
+ /* restore only global.dat file from directory */
+ globals_only = true;
+ break;
```
Please update this comment to accurately reflect the file being restored
(e.g., `toc.dat` or the global objects within the archive).
Fixed.
### 2\. Incorrect Order of `case` Statements in `pg_restore.c`
The new `case 7` statement in `pg_restore.c` appears to be
inserted before `case 6`, disrupting the numerical order.
```c
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
case 6:
opts->restrict_key = pg_strdup(optarg);
```
Please re-order the `case` statements so they follow ascending
numerical order.
Fixed.
### 3\. Missing Example in SGML Documentation
The SGML documentation for `pg_dumpall` is missing an explicit
example demonstrating its use with non-text formats (e.g., directory format).
It would be beneficial to include a clear example for this new feature.
I think we don't add such cases in the docs. We already added test cases in
the code. If others also feel that we should add an example in the SGML docs,
then I will update the documentation with it.
### 4\. Cosmetic Issues
Please address the following minor stylistic points:
Please ensure the function signatures
adhere to standard coding style, particularly for line wrapping.
The following lines seem to have inconsistent indentation:
```c
static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
```
Please fix instances where the 80-character line limit is
crossed, such as in the example below:
```c
n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
```
Fixed.
I believe this concludes my formal review.
Thanks,
Vaibhav Dalvi
On Wed, Nov 5, 2025 at 12:29 PM Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Thank you for the fix. Please find my further review comments below.
### Restrict-Key Option
The `--restrict-key` option is currently being accepted by
`pg_dumpall` even when non-plain formats are specified,
which contradicts its intended use only with the plain format.
For example:
```
$ ./db/bin/pg_dump --format=d -f testdump_dir --restrict-key=RESTRICT_KEY
pg_dump: error: option --restrict-key can only be used with --format=plain
$ ./db/bin/pg_dumpall --format=d -f testdump_dir --restrict-key=RESTRICT_KEY
pg_dumpall: error: invalid restrict key
```
I have attached a delta patch that addresses the issue with the
`--restrict-key` option. It would be beneficial to include a dedicated
test case for this check.
We should dump the restrict-key in all modes, as we need it to restore with
the "-f file" option in text mode.
Ex: pg_dumpall --format=d -f testdump_dir
and restore: pg_restore testdump_dir -d databasename -C -f testdumpfile
(In testdumpfile, we will generate commands from the archive dump.)
So I am not merging this delta patch.
### Use of Dump Options Structure (dopt)
Please ensure consistency by utilizing the main dump options
structure (`dopt`) instead of declaring and using individual variables
where the structure already provides fields. For example, the
`output_clean` variable seems redundant here:```c
case 'c':
output_clean = true;
dopt.outputClean = 1;
break;
```
output_clean is not added by this patch. I will analyse this comment
and will respond in the next update.
In my attached delta file, I have replaced the unnecessary
`restrict_key` variable with `dopt.restrict_key`.
This is also not part of this patch. If you feel we should add this to dopt,
please suggest it in a separate thread.
### Cosmetic Issues
1. Please review the spacing around the pointer:
```c
+ ((ArchiveHandle * )fout) ->connection = conn;
+ ((ArchiveHandle * ) fout) -> public.numWorkers = 1;
```
Fixed.
2. Please be consistent with the punctuation of single-line comments;
some end with a full stop (`.`) and others do not.
Based on nearby code comments, I made changes. I will try to fix these
inconsistencies.
3. In the SGML documentation changes, some new statements start
with one space, and others start with two. Please adhere to a single
standard for indentation across the patch.
Okay. I will fix these.
Regards,
Vaibhav
EnterpriseDB
On Mon, Nov 3, 2025 at 5:24 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Mon, 3 Nov 2025 at 12:06, Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Thank you for your work on this feature.
I have just begun reviewing the latest patch and
encountered the following errors during the initial setup:
```
$ ./db/bin/pg_restore testdump_dir -C -d postgres -F d -p 5556
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj...
^
Command was: \restrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hb
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCj...
^
Command was: \unrestrict aO9K1gzVZTlafidF5fWx8ADGzUnIiAcguFz5qskGaFDygTCjCj9vg3Xxys1b3hb
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \connect template1
^
Command was: \connect template1
pg_restore: error: could not execute query: ERROR: syntax error at or near "\\"
LINE 1: \connect postgres
^
Command was: \connect postgres
```
To cross-check, I tried with a plain dump (with pg_dumpall) and
restored it (SQL file restore) without the patch, and didn't get the above
connection errors.
It appears there might be an issue with the dump file itself.
Please note that this is my first observation as I have just
started the review. I will continue with my assessment.
Regards,
Vaibhav Dalvi
EnterpriseDB
Thanks Vaibhav for the review.
This change was added by me in v04. Only in the case of a file should we restore these commands. The attached patch fixes this.
If we dump and restore the same file with the same user, then we will get a CREATE ROLE error, as the same role already exists. I think either we can ignore this error, or we can keep it, since a restore can be done with different users.
mst@localhost bin]$ ./pg_restore d1 -C -d postgres
pg_restore: error: could not execute query: ERROR: role "mst" already exists
Command was: CREATE ROLE mst;
ALTER ROLE mst WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;
pg_restore: warning: errors ignored on restore: 1
On Fri, Oct 31, 2025 at 2:51 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Tue, 28 Oct 2025 at 11:32, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Thu, 16 Oct 2025 at 16:24, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Wed, 15 Oct 2025 at 23:05, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Sun, 24 Aug 2025 at 22:12, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-08-23 Sa 9:08 PM, Noah Misch wrote:
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
OK, now that's reverted we should discuss how to proceed. I had two thoughts
- we could invent a JSON format for the globals, or we could just use
the existing archive format. I think the archive format is pretty flexible,
and should be able to accommodate this. The downside is it's not humanly
readable. The upside is that we don't need to do anything special either to
write it or parse it.
I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely to
have corresponding tension on the pg_restore side. Resolving that tension
will reveal much of the project's scope that remained hidden during the v18
attempt. Perhaps more important than that, using the archiver API means
future pg_dump and pg_restore options are more likely to cooperate properly
with $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

Yeah, that's what we're going to try.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com

Thanks Andrew, Noah and all others for feedback.
Based on the above suggestions and discussion, I removed the SQL commands
from the global.dat file. For the global commands, we now create a
toc.dat/toc.dmp/toc.tar file according to the specified format and make
archive entries for them. With this approach, we no longer hard-code
parsing of the global.dat file, and we can skip DROP DATABASE with the
globals-only option.

Here, I am attaching a patch for review, testing and feedback. This is
a WIP patch. I will do some more code cleanup and will add some more
comments also. Please review this and let me know design level
feedback. Thanks Tushar Ahuja for some internal testing and feedback.

Hi,
Here, I am attaching an updated patch. In offline discussion, Andrew
reported some test-case failures(Thanks Andrew). I fixed those.
Please let me know your feedback on the patch.

Hi,
Here I am attaching a re-based patch as v02 was failing on head.
Thanks Tushar for the testing.
Please review this and let me know your feedback.

Hi all,
Here I am attaching an updated patch for review and testing. Based on
some offline comments by Andrew, I did some code cleanup.
Please consider this patch for feedback.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com

Here, I am attaching an updated patch for review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Hi,
Here, I am attaching an updated patch for review and testing.
Fix: as suggested by Vaibhav, added an error for the --restrict-key option
with non-text formats.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v07_11112025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From cef022cee856e71a4a4a078ff3610eec90e1d805 Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Tue, 11 Nov 2025 11:25:34 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.dat/.dmp/.tar and map.dat. The
first contains commands restoring the global data based on -F, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat,
it restores the global settings from toc.dat/.dmp/.tar if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
v07
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 31 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 607 ++++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 595 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 15 +
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
14 files changed, 1671 insertions(+), 147 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the already-registered
+ * entry in the shutdown array so cleanup targets the current archive.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 59eaecb4ed7..f5df9ac5c2b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file, since we are restoring a
+ * dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,17 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip for RESTRICT, UNRESTRICT, CONNECT. */
+ if (!ropt->filename && te && te->tag &&
+ ((strcmp(te->tag, "RESTRICT") == 0) ||
+ (strcmp(te->tag, "UNRESTRICT") == 0) ||
+ (strcmp(te->tag, "CONNECT") == 0)))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1332,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1711,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1732,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a00918bacb4..13e1764ec70 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..928ad7e5e0a 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,9 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +128,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +160,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +210,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +222,8 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
+ char global_path[MAXPGPATH];
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +262,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +275,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +293,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +335,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +451,25 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option --restrict-key can only be used with --format=plain");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +530,35 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+
+ /* Set the file path for the global SQL commands. */
+ if (archDumpFormat == archCustom)
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", filename);
+ else if (archDumpFormat == archTar)
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", filename);
+ else if (archDumpFormat == archDirectory)
+ snprintf(global_path, MAXPGPATH, "%s", filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +608,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +642,105 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
if (verbose)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ /* Open the output file */
+ fout = CreateArchive(global_path, archDumpFormat, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle*)fout)->connection = conn;
+ ((ArchiveHandle*)fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also version check in pg_dumpall.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings ? std_strings : "off";
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +784,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
-
- PQfinish(conn);
+ dumpDatabases(conn, archDumpFormat);
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +830,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +912,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -791,7 +934,12 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +947,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -889,7 +1043,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1152,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1163,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1088,7 +1248,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1332,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1389,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1411,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
+
+ destroyPQExpBuffer(creaQry);
}
}
@@ -1260,7 +1431,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1288,7 +1460,12 @@ dumpRoleGUCPrivs(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1489,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1513,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1342,20 +1525,31 @@ dropTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1382,7 +1576,12 @@ dumpTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1650,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1482,7 +1686,12 @@ dropDBs(PGconn *conn)
"ORDER BY datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1706,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
+
+ destroyPQExpBuffer(delQry);
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1532,7 +1749,18 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1770,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1608,10 +1840,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1860,48 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * For directory/tar/custom format, create a "databases" subdirectory
+ * under the main output directory; pg_dump will then create each
+ * database's dump file (or subdirectory) inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1918,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1944,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * For non-plain formats, build the per-database dump file path and
+ * record the dboid/dbname pair in map.dat.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +1992,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2005,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2015,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For non-plain formats, pass the output file name and the dump format
+ * to pg_dump so that it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
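The branch above chooses pg_dump's `--format` option from the requested archive format. As a quick standalone sketch of that mapping (the enum and `format_option` below are local stand-ins for illustration; the real `ArchiveFormat` lives in pg_backup.h):

```c
#include <string.h>
#include <assert.h>

/* Local stand-in for the ArchiveFormat enum (the real one is in pg_backup.h) */
typedef enum { archNull, archCustom, archDirectory, archTar } ArchiveFormat;

/* Map a non-plain archive format to the pg_dump --format option text */
const char *
format_option(ArchiveFormat fmt)
{
	switch (fmt)
	{
		case archCustom:
			return " --format=custom ";
		case archDirectory:
			return " --format=directory ";
		case archTar:
			return " --format=tar ";
		default:
			return "";			/* plain text is handled by the -Fa/-Fp path */
	}
}
```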
@@ -1807,7 +2128,18 @@ dumpTimestamp(const char *msg)
time_t now = time(NULL);
if (strftime(buf, sizeof(buf), PGDUMP_STRFTIME_FMT, localtime(&now)) != 0)
- fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "-- %s %s\n\n", msg, buf);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "-- %s %s\n\n", msg, buf);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+ }
}
/*
@@ -1868,3 +2200,66 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Returns the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Create one archive entry for the given SQL text and tag.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
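For reference, the map.dat "protocol" shared between the two programs above is one `<dboid> <dbname>` pair per line. A minimal sketch of the writer side (`format_map_entry` is an illustrative helper, not part of the patch):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Format one map.dat entry as dumpDatabases() writes it:
 * the database OID, a single space, the database name, a newline. */
int
format_map_entry(char *buf, size_t buflen, unsigned int dboid, const char *dbname)
{
	return snprintf(buf, buflen, "%u %s\n", dboid, dbname);
}
```

Because the restore side treats everything after the first space as the name, database names containing spaces round-trip intact.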
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..97a6bcb6d31 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,61 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +119,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +173,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global sql commands. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +356,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +386,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +518,105 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If the archive looks like a pg_dumpall archive (map.dat, toc.tar, or
+ * toc.dmp is present), restore all of its databases, skipping those
+ * matching --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "map.dat") ||
+ file_exists_in_directory(inputFileSpec, "toc.tar") ||
+ file_exists_in_directory(inputFileSpec, "toc.dmp")))
+ {
+ char global_path[MAXPGPATH];
+
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* With --globals-only, restore just the globals and skip the databases. */
+ if (globals_only)
+ {
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else /* not a pg_dumpall archive; restore a single database */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ *
+ * If globals_only is set, then skip DROP DATABASE commands from restore.
+ */
+static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ return restore_one_database(inputFileSpec, opts, numWorkers,
+ append_data, num, globals_only);
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database from its archive.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +624,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only update the AH handle for cleanup:
+ * the previous entry is already in the array and its connection has been
+ * closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +652,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +684,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +701,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +737,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +843,415 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entry in dbname_oid_list whose name matches an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("considering PATTERN as NAME for --exclude-database option as no database connection while doing pg_restore");
+
+ /*
+ * Walk the database list; any name that matches an exclude pattern is
+ * marked to be skipped by setting its OID to InvalidOid.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
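When no connection is available, the loop above degrades to exact case-insensitive name comparison. That fallback can be sketched standalone (the list shape is simplified to an array, and `strcasecmp` stands in for `pg_strcasecmp`):

```c
#include <stdbool.h>
#include <stddef.h>
#include <strings.h>			/* strcasecmp (POSIX) */
#include <assert.h>

/* Return true if dbname matches any exclude pattern exactly,
 * compared case-insensitively as in get_dbnames_list_to_restore(). */
bool
excluded_by_name(const char *dbname, const char *const *patterns, size_t npatterns)
{
	for (size_t i = 0; i < npatterns; i++)
	{
		if (strcasecmp(dbname, patterns[i]) == 0)
			return true;
	}
	return false;
}
```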
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names with their corresponding OIDs.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+
+ /*
+ * If there is no map.dat file in the dump, return early: there are no
+ * databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen + 1);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the dump
+ * directory, based on the map.dat file mapping.
+ *
+ * Databases specified with the exclude-database option are
+ * skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set the globals file path based on which file is present. */
+ if (file_exists_in_directory(inputFileSpec, "toc.tar"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.tar", inputFileSpec);
+ else if (file_exists_in_directory(inputFileSpec, "toc.dmp"))
+ snprintf(global_path, MAXPGPATH, "%s/toc.dmp", inputFileSpec);
+ else
+ snprintf(global_path, MAXPGPATH, "%s", inputFileSpec);
+
+ /* Save the database name so it can be reused for all databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If any exclude patterns were given, connect to a database so the
+ * patterns can be processed.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * Filter the database list according to the exclude patterns.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the globals TOC file and execute/append all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags that might have been changed by pg_backup_archiver.c
+ * during the previous database's restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..e8f800a48c1
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -237,6 +237,12 @@ command_fails_like(
'pg_restore: options -C\/--create and -1\/--single-transaction cannot be used together'
);
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
# also fails for -r and -t, but it seems pointless to add more tests for those.
command_fails_like(
[ 'pg_dumpall', '--exclude-database=foo', '--globals-only' ],
@@ -244,4 +250,13 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f' => 'dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and also to save the dump and restore output needed for the
+# test's assertions.
+#
+# The "setup_sql" entry is a valid psql script containing SQL commands to
+# execute before running the tests. All setups are executed before any
+# test runs.
+#
+# The "dump_cmd" and "restore_cmd" entries are the commands that will be
+# executed. The "restore_cmd" must have the --file flag to save the restore
+# output so that we can assert on it.
+#
+# The "like" and "unlike" entries are regexps matched against the restore
+# output. Each test case must fill in at least one of them, and may have
+# both; see the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a fresh target cluster for each test case's pg_restore run so
+ # that we don't need to clean up the target cluster between runs.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # Read the output file written via pg_restore's --file option.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases restoring a pg_dumpall dump with pg_restore.
+# Test case 1: -C is not used when restoring a pg_dumpall dump.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# Test case 2: --list is used with a pg_dumpall dump.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# Test case 3: a non-existent database is given with the -d option.
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
On 2025-11-11 Tu 12:59 AM, Mahendra Singh Thalor wrote:
Hi,
Here, I am attaching an updated patch for the review and testing.
FIX: as suggested by Vaibhav, added error for --restrict-key option
with non-text format.
Regarding the name and format of the globals toc file, I'm inclined to
think we should always use custom format, regardless of whether the
individual databases will be in custom, tar or directory formats, and
that it should be called something distinguishable, e.g. toc.glo.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Tue, Nov 11, 2025 at 11:29 AM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Thu, 6 Nov 2025 at 11:03, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Thanks Vaibhav, Tushar and Andrew for the review and testing.
Thanks Mahendra, getting this error against v07 series patch
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft -f tar.dumpc -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '',
false);
pg_dumpall: pg_dumpall.c:2256: createOneArchiveEntry: Assertion `fout !=
((void *)0)' failed.
Aborted
regards,
Tushar Ahuja
EDB https://www.enterprisedb.com/
Thanks Andrew for the review.
On Tue, 11 Nov 2025 at 20:41, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-11-11 Tu 12:59 AM, Mahendra Singh Thalor wrote:
Hi,
Here, I am attaching an updated patch for the review and testing.
FIX: as suggested by Vaibhav, added error for --restrict-key option
with non-text format.
Regarding the name and format of the globals toc file, I'm inclined to
think we should always use custom format, regardless of whether the
individual databases will be in custom, tar or directory formats, and
that it should be called something distinguishable, e.g. toc.glo.
I also agree with your point. Fixed.
On Mon, 17 Nov 2025 at 19:38, tushar <tushar.ahuja@enterprisedb.com> wrote:
On Tue, Nov 11, 2025 at 11:29 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Thu, 6 Nov 2025 at 11:03, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Thanks Vaibhav, Tushar and Andrew for the review and testing.
Thanks Mahendra, getting this error against v07 series patch
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft -f tar.dumpc -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
pg_dumpall: pg_dumpall.c:2256: createOneArchiveEntry: Assertion `fout != ((void *)0)' failed.
Aborted
regards,
Thanks Tushar for the report. Fixed.
Here, I am attaching an updated patch for the review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v08_17112025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From d70167443370f1396f0f78485f746742fb92a821 Mon Sep 17 00:00:00 2001
From: ThalorMahendra <mahi6run@gmail.com>
Date: Mon, 17 Nov 2025 22:35:35 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.glo and map.dat. The
first contains commands restoring the global data in custom format, and the
second contains a map from OIDs to database names in text format. The directory
also contains a subdirectory called databases, inside which archives in
the specified format are created, named using the database OIDs.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat and toc.glo,
it restores the global settings from toc.glo if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
v08
---
doc/src/sgml/ref/pg_dumpall.sgml | 89 +++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 1 -
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 29 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 592 +++++++++++++++++++++-----
src/bin/pg_dump/pg_restore.c | 606 ++++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 27 ++
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
14 files changed, 1677 insertions(+), 147 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 9f639f61db0..4063e88d388 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: this option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will be named after the database's <type>oid</type>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using
+ <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using
+ <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
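As background for the documentation above: the design discussed upthread distinguishes a pg_dumpall archive from an ordinary pg_dump archive by the magic block at the start of toc.dat ("PGGLO" vs. "PGDMP"). A minimal sketch of such a detection step, assuming those magic strings (they are a proposal from the thread, not a committed on-disk format, and the function and enum names here are illustrative):

```c
#include <stdio.h>
#include <string.h>

typedef enum
{
	ARCHIVE_UNKNOWN,
	ARCHIVE_DUMP,				/* ordinary pg_dump archive ("PGDMP") */
	ARCHIVE_GLOBALS				/* pg_dumpall global archive ("PGGLO") */
} ArchiveKind;

/* Read the 5-byte magic block and classify the archive. */
static ArchiveKind
detect_archive_kind(const char *path)
{
	char		magic[5];
	FILE	   *fp = fopen(path, "rb");

	if (fp == NULL)
		return ARCHIVE_UNKNOWN;
	if (fread(magic, 1, sizeof(magic), fp) != sizeof(magic))
	{
		fclose(fp);
		return ARCHIVE_UNKNOWN;
	}
	fclose(fp);

	if (memcmp(magic, "PGDMP", 5) == 0)
		return ARCHIVE_DUMP;
	if (memcmp(magic, "PGGLO", 5) == 0)
		return ARCHIVE_GLOBALS;
	return ARCHIVE_UNKNOWN;
}
```

On seeing the global magic, pg_restore would restore the globals first and then each per-database subdirectory, as described upthread.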
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..f44a8a45fca 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -287,7 +287,6 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the already-registered
+ * shutdown entry so that cleanup uses the current archive handle.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c84b017f21b..5b8dd295070 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append output to the file, since we are restoring a
+ * dump of multiple databases taken by pg_dumpall.  If globals_only is set,
+ * restore only global objects.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,15 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip the \connect meta-command unless writing to a script file. */
+ if (!ropt->filename && te && te->tag &&
+ (strcmp(te->tag, "CONNECT") == 0))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1330,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1709,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1730,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
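The append_data handling in SetOutput() above boils down to a mode choice when opening the output file: every database after the first must append to the combined script rather than truncate it. A minimal sketch, with plain stdio mode strings standing in for PostgreSQL's PG_BINARY_A/PG_BINARY_W macros (the helper name is illustrative, not part of the patch):

```c
#include <stdbool.h>

/*
 * Pick the fopen() mode for restore output: append when either the caller
 * requested it (restoring several databases into one script) or the archive
 * itself is in append mode.
 */
static const char *
choose_output_mode(bool append_data, bool archive_mode_is_append)
{
	if (append_data || archive_mode_is_append)
		return "ab";			/* stands in for PG_BINARY_A */
	return "wb";				/* stands in for PG_BINARY_W */
}
```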
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a00918bacb4..13e1764ec70 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..0bf892b1fcc 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,10 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts,
+ char *dbfile, ArchiveFormat archDumpFormat);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,6 +78,9 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
@@ -123,6 +128,13 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static const CatalogId nilCatalogId = {0, 0};
+static ArchiveMode archiveMode = archModeWrite;
+static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +160,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +210,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +222,7 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +261,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -257,6 +274,7 @@ main(int argc, char *argv[])
case 'c':
output_clean = true;
+ dopt.outputClean = 1;
break;
case 'd':
@@ -274,7 +292,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ formatName = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +334,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -429,6 +450,25 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(formatName);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option --restrict-key can only be used with --format=plain");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +529,27 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout.  For a
+ * non-plain format, create the output directory instead.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create a new directory, or accept an existing empty one. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +599,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +633,110 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global SQL commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set the file path for the global SQL commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archiveMode, NULL, sync_method);
+
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", std_strings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ std_strings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -659,27 +780,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, archDumpFormat);
- PQfinish(conn);
-
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +826,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +908,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -791,7 +930,12 @@ dropRoles(PGconn *conn)
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +943,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -889,7 +1039,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1148,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1159,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1088,7 +1244,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1328,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1385,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1407,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1260,7 +1427,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1288,7 +1456,12 @@ dumpRoleGUCPrivs(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1485,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1509,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1342,20 +1521,31 @@ dropTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1382,7 +1572,12 @@ dumpTablespaces(PGconn *conn)
"ORDER BY 1");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1646,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1482,7 +1682,12 @@ dropDBs(PGconn *conn)
"ORDER BY datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1702,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1532,7 +1745,18 @@ dumpUserConfig(PGconn *conn, const char *username)
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1766,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1608,10 +1836,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, ArchiveFormat archDumpFormat)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1856,48 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
"ORDER BY (datname <> 'template1'), datname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If a non-plain format is specified, create a "databases" subdirectory
+ * under the main directory; pg_dump will then create each database's dump
+ * file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1914,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1940,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must do so. */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain-format dump, build the per-database output path
+ * and record the database OID and name in the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +1988,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +2001,8 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile,
+ ArchiveFormat archDumpFormat)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2011,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, append the output file name and
+ * dump format to the pg_dump command so it produces an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1868,3 +2185,66 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the --format option value.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Returns the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Creates a single archive entry in the global archive for the given SQL
+ * text and tag.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..ea9f64637ea 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,31 +41,61 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
#include "pg_backup_utils.h"
+
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers, bool append_data,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +119,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +173,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* Restore only global SQL commands. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +356,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +386,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +518,121 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a toc.glo file is present, restore all the databases listed in
+ * map.dat, skipping any that match an --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* With --globals-only, restore just the globals and we are done. */
+ if (globals_only)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+ n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+ }
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ /* No toc.glo file, so restore a single pg_dump archive. */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restores all global objects from the toc.glo archive.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during the
+ * restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ int nerror;
+ int format = opts->format;
+
+ /* Set format as custom so that toc.glo file can be read. */
+ opts->format = archCustom;
+
+ nerror = restore_one_database(inputFileSpec, opts, numWorkers,
+ append_data, num, globals_only);
+
+ /* Reset format value. */
+ opts->format = format;
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restores one database from its toc.dat archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +640,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, just replace the AH handle registered
+ * for cleanup: the previous database's connection has already been
+ * closed, so its array slot can be reused.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +668,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +700,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +717,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +753,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +859,410 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Marks for skipping any entries in dbname_oid_list whose names match an
+ * entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns will be matched as literal database names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names and
+ * their corresponding OIDs.
+ *
+ * Returns the total number of databases listed in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in the dump directory, there are no
+ * databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ /* Copy the name, dropping the trailing newline kept by pg_get_line_buf. */
+ strlcpy(dbidname->str, dbname, namelen);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restores all databases whose dumps are present in the dump directory,
+ * driven by the map.dat mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during the restores.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ /* Save the connection database name to reuse it for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If exclude-patterns is given, then connect to the database to process
+ * it.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /* Filter the database list according to the exclude patterns. */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the toc.glo file and restore all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise fall back to a directory named
+ * {oid}.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If the database already exists, don't set the createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..083f5c5bf9d
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -244,4 +244,31 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and the names are used for failure
+# reporting and for saving the dump and restore output that the tests
+# assert on.
+#
+# The "setup_sql" is a valid psql script containing SQL commands to execute
+# before actually running the tests. All setups are executed before any
+# test runs.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that we
+# can assert on it.
+#
+# The "like" and "unlike" entries are regexps matched against the pg_restore
+# output. At least one of them must be filled in per test case, but both may
+# be present. See the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added to LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^\n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case run so that we
+ # don't need to take care of the cleanup from the target cluster after each
+ # run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with a dump from pg_dumpall restored using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.39.3
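The TAP tests above exercise pg_dumpall and pg_restore end to end; a cheap structural check of the archive directory itself could complement them. The following is a minimal sketch in Python (illustration only; the expected entries toc.glo, map.dat, and databases/ follow the layout described in this thread's commit message and may change as the patch evolves):

```python
import os

# Expected top-level entries of a non-text pg_dumpall archive, per the
# layout described in the patch's commit message: toc.glo holds the
# global commands, map.dat maps database oids to names, and databases/
# holds the per-database archives.
EXPECTED = [
    ("toc.glo", os.path.isfile),
    ("map.dat", os.path.isfile),
    ("databases", os.path.isdir),
]

def missing_archive_entries(dump_dir):
    """Return the expected entries that are absent from dump_dir."""
    return [name for name, check in EXPECTED
            if not check(os.path.join(dump_dir, name))]
```

A per-run call such as this could then be folded into the test loop, failing fast before the regex checks when the archive layout is wrong.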
Hi Mahendra,
Thanks Mahendra for working on this.
Looks like my previous comment below is not addressed:
1.
### Use of Dump Options Structure (dopt)
Please ensure consistency by utilizing the main dump options
structure (`dopt`) instead of declaring and using individual variables
where the structure already provides fields. For example, the
`output_clean` variable seems redundant here:
```c
case 'c':
output_clean = true;
dopt.outputClean = 1;
break;
```
I agree that the output_clean variable is not added by your patch
but the introduction of dopt by your patch makes it redundant because
dopt has dopt.outputClean. Please look at below code from pg_dump.c
for the reference:
case 'c': /* clean (i.e., drop) schema prior to create */
dopt.outputClean = 1;
break;
case 25:
dopt.restrict_key = pg_strdup(optarg);
break;
2.
### Missing Example in SGML Documentation
The SGML documentation for `pg_dumpall` is missing an explicit
example demonstrating its use with non-text formats (e.g., directory
format).
It would be beneficial to include a clear example for this new feature.
I think pg_dumpall should have separate examples similar to pg_dump
rather than referencing the pg_dump example because pg_dumpall
doesn't have to mention the database name without -l or --database
in the command.
3.
1. Is the following change in `src/bin/pg_dump/connectdb.c` intentional?

--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c

Yes, we need this. If there is any error, then we were trying to
disconnect the database in two places, so we were getting a crash. I will
try to reproduce the crash without this patch and will respond.

Have you added a test case in the regression suite which fails if we remove
this particular change and works well with the change? Or, if possible, could
you please demonstrate here at least.
4. The variable name `append_data` doesn't look meaningful to me.
Instead we can use `append_database`/`append_databases`,
because if this variable is set then we dump the databases along with
global objects. In the case of pg_dump, append_data or data_only does make
sense to differentiate between schema and data, but in the case of
pg_dumpall, if this variable is set then we're dumping schema as well as
data, i.e., in short, the databases.
------------------------------------ pg_dumpall.c
----------------------------------------
5. The variable name formatName doesn't follow the naming convention of
variables available around it. I think use of format_name/formatname would
be better.
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
------------------------------------ pg_restore.c
----------------------------------------
6. Fourth parameter (i.e. append_data) to function restore_global_objects()
is redundant.
The value provided by all callers for this parameter is always false.
I would suggest removing this parameter and, in the definition of this
function, calling restore_one_database() with false as the 4th argument.
Find diff below:
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -64,8 +64,7 @@ static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
 					int numWorkers, bool append_data, int num,
 					bool globals_only);
 static int restore_global_objects(const char *inputFileSpec,
-					RestoreOptions *opts, int numWorkers, bool append_data,
-					int num, bool globals_only);
+					RestoreOptions *opts, int numWorkers, int num, bool globals_only);
 static int restore_all_databases(const char *inputFileSpec,
 					SimpleStringList db_exclude_patterns, RestoreOptions *opts,
 					int numWorkers);
 static int get_dbnames_list_to_restore(PGconn *conn,
@@ -554,7 +553,7 @@ main(int argc, char **argv)
 		/* Set path for toc.glo file. */
 		snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
-		n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+		n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
 		pg_log_info("database restoring skipped because option -g/--globals-only was specified");
 	}
@@ -602,7 +601,7 @@ main(int argc, char **argv)
  * If globals_only is set, then skip DROP DATABASE commands from restore.
  */
 static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
-					int numWorkers, bool append_data, int num, bool globals_only)
+					int numWorkers, int num, bool globals_only)
 {
 	int nerror;
 	int format = opts->format;
@@ -610,8 +609,8 @@ static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opt
 	/* Set format as custom so that toc.glo file can be read. */
 	opts->format = archCustom;
-	nerror = restore_one_database(inputFileSpec, opts, numWorkers,
-					append_data, num, globals_only);
+	nerror = restore_one_database(inputFileSpec, opts, numWorkers, false, num,
+					globals_only);
 	/* Reset format value. */
 	opts->format = format;
@@ -1097,7 +1096,7 @@ restore_all_databases(const char *inputFileSpec,
 	/* If map.dat has no entries, return after processing global commands. */
 	if (dbname_oid_list.head == NULL)
-		return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+		return restore_global_objects(global_path, opts, numWorkers, 0, false);
 	pg_log_info(ngettext("found %d database name in \"%s\"",
 				"found %d database names in \"%s\"",
@@ -1151,7 +1150,7 @@ restore_all_databases(const char *inputFileSpec,
 	PQfinish(conn);
 	/* Open toc.dat file and execute/append all the global sql commands. */
-	n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+	n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
Regression is successful with these changes.
7. Fix indentation:
static int restore_global_objects(const char *inputFileSpec,
RestoreOptions *opts, int numWorkers, bool append_data,
int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int
numWorkers);
8. Remove extra line:
+
static void usage(const char *progname);
9. Remove extra space after map.dat and before comma:
+ * databases from map.dat , but skip restoring those matching
10. Fix 80 char limits:
+	n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+	num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+	return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+	n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+	pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
Regards,
Vaibhav
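Several of the points above revolve around restore_all_databases() reading map.dat and iterating the per-database archives. For readers following along, here is a rough sketch of that oid-to-name lookup in Python (an illustration only, not the patch's C implementation; it assumes the simple one-"oid dbname"-pair-per-line text format that the commit message describes for map.dat):

```python
def parse_map_dat(text):
    """Parse map.dat-style content: one 'oid dbname' pair per line.

    Returns a list of (oid, dbname) tuples in file order, skipping
    blank lines. The database name is taken verbatim after the first
    space, since names may themselves contain spaces.
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        oid, _, dbname = line.partition(" ")
        entries.append((int(oid), dbname))
    return entries
```

With such a list in hand, a restore driver can skip entries matching --exclude-database patterns and dispatch the remaining archives (named by oid) for restoration.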
On Mon, Nov 17, 2025 at 10:45 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Thanks Andrew for the review.
On Tue, 11 Nov 2025 at 20:41, Andrew Dunstan <andrew@dunslane.net> wrote:

On 2025-11-11 Tu 12:59 AM, Mahendra Singh Thalor wrote:
Hi,
Here, I am attaching an updated patch for the review and testing.

FIX: as suggested by Vaibhav, added error for --restrict-key option
with non-text format.

Regarding the name and format of the globals toc file, I'm inclined to
think we should always use custom format, regardless of whether the
individual databases will be in custom, tar or directory formats, and
that it should be called something distinguishable, e.g. toc.glo.

I also agree with your point. Fixed.
On Mon, 17 Nov 2025 at 19:38, tushar <tushar.ahuja@enterprisedb.com> wrote:

On Tue, Nov 11, 2025 at 11:29 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Thu, 6 Nov 2025 at 11:03, Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Thanks Vaibhav, Tushar and Andrew for the review and testing.
Thanks Mahendra, getting this error against v07 series patch
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft -f tar.dumpc -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
pg_dumpall: pg_dumpall.c:2256: createOneArchiveEntry: Assertion `fout !=
((void *)0)' failed.
Aborted
regards,
Thanks Tushar for the report. Fixed.
Here, I am attaching an updated patch for the review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Thanks Vaibhav for the review.
On Tue, 18 Nov 2025 at 16:05, Vaibhav Dalvi
<vaibhav.dalvi@enterprisedb.com> wrote:
Hi Mahendra,
Thanks Mahendra for working on this.
Looks like my previous comment below is not addressed:
1.
### Use of Dump Options Structure (dopt)
Please ensure consistency by utilizing the main dump options
structure (`dopt`) instead of declaring and using individual variables
where the structure already provides fields. For example, the
`output_clean` variable seems redundant here:
```c
case 'c':
output_clean = true;
dopt.outputClean = 1;
break;
```
Fixed. output_clean was a global variable because it was used in 2
functions. Now I am passing dopt.outputClean as a function argument
to the other function.
I agree that the output_clean variable is not added by your patch
but the introduction of dopt by your patch makes it redundant because
dopt has dopt.outputClean. Please look at below code from pg_dump.c
for the reference:

case 'c': /* clean (i.e., drop) schema prior to create */
dopt.outputClean = 1;
break;
case 25:
dopt.restrict_key = pg_strdup(optarg);
break;

2.
### Missing Example in SGML Documentation
The SGML documentation for `pg_dumpall` is missing an explicit
example demonstrating its use with non-text formats (e.g., directory format).
It would be beneficial to include a clear example for this new feature.

I think pg_dumpall should have separate examples similar to pg_dump
rather than referencing the pg_dump example because pg_dumpall
doesn't have to mention the database name without -l or --database
in the command.
Fixed. Added some examples.
3.
1. Is the following change in `src/bin/pg_dump/connectdb.c` intentional?

--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c

Yes, we need this. If there is any error, then we were trying to
disconnect the database in two places, so we were getting a crash. I will
try to reproduce the crash without this patch and will respond.

Have you added a test case in the regression suite which fails if we remove
this particular change and works well with the change? Or, if possible, could
you please demonstrate here at least.

Fixed. With AH (archive), we should not free pointers in this exec call,
as we free them via the exit_nicely hook (we register AH with
on_exit_close_archive).
4. The variable name append_data doesn't look meaningful to me.
Instead we can use append_database/append_databases?
because if this variable is set then we dump the databases along with
global objects. In case of pg_dump, append_data or data_only does make
sense to differentiate between schema and data but in case of pg_dumpall
if this variable is set then we're dumping schema as well as data i.e. in-short
the databases.
As of now, I am keeping this append_data as this was from an already
committed patch.
------------------------------------ pg_dumpall.c ----------------------------------------
5. The variable name formatName doesn't follow the naming convention of
variables available around it. I think use of format_name/formatname would
be better.

char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
Fixed.
------------------------------------ pg_restore.c ----------------------------------------
6. Fourth parameter (i.e. append_data) to function restore_global_objects() is redundant.
The value provided by all callers for this parameter is always false.

I would suggest removing this parameter and, in the definition of this
function, calling restore_one_database() with false as the 4th argument.
Find diff below:
Fixed.
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -64,8 +64,7 @@ static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
 					int numWorkers, bool append_data, int num,
 					bool globals_only);
 static int restore_global_objects(const char *inputFileSpec,
-					RestoreOptions *opts, int numWorkers, bool append_data,
-					int num, bool globals_only);
+					RestoreOptions *opts, int numWorkers, int num, bool globals_only);
 static int restore_all_databases(const char *inputFileSpec,
 					SimpleStringList db_exclude_patterns, RestoreOptions *opts,
 					int numWorkers);
 static int get_dbnames_list_to_restore(PGconn *conn,
@@ -554,7 +553,7 @@ main(int argc, char **argv)
 		/* Set path for toc.glo file. */
 		snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
-		n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+		n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
 		pg_log_info("database restoring skipped because option -g/--globals-only was specified");
 	}
@@ -602,7 +601,7 @@ main(int argc, char **argv)
  * If globals_only is set, then skip DROP DATABASE commands from restore.
  */
 static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
-					int numWorkers, bool append_data, int num, bool globals_only)
+					int numWorkers, int num, bool globals_only)
 {
 	int nerror;
 	int format = opts->format;
@@ -610,8 +609,8 @@ static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opt
 	/* Set format as custom so that toc.glo file can be read. */
 	opts->format = archCustom;
-	nerror = restore_one_database(inputFileSpec, opts, numWorkers,
-					append_data, num, globals_only);
+	nerror = restore_one_database(inputFileSpec, opts, numWorkers, false, num,
+					globals_only);
 	/* Reset format value. */
 	opts->format = format;
@@ -1097,7 +1096,7 @@ restore_all_databases(const char *inputFileSpec,
 	/* If map.dat has no entries, return after processing global commands. */
 	if (dbname_oid_list.head == NULL)
-		return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+		return restore_global_objects(global_path, opts, numWorkers, 0, false);
 	pg_log_info(ngettext("found %d database name in \"%s\"",
 				"found %d database names in \"%s\"",
@@ -1151,7 +1150,7 @@ restore_all_databases(const char *inputFileSpec,
 	PQfinish(conn);
 	/* Open toc.dat file and execute/append all the global sql commands. */
-	n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+	n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
7. Fix indentation:
static int restore_global_objects(const char *inputFileSpec,
RestoreOptions *opts, int numWorkers, bool append_data,
int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
Fixed some.
8. Remove extra line:
+
static void usage(const char *progname);
Fixed.
9. Remove extra space after map.dat and before comma:
+ * databases from map.dat , but skip restoring those matching
Fixed.
10. Fix 80 char limits:
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
Fixed some.
I will do some more cleanup in the coming versions.
Here, I am attaching an updated patch for the review and testing.
Regards,
Vaibhav
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v09_27112025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From 71b9a213e7bb1b68e4d05b373516e0eca6337f38 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 27 Nov 2025 13:25:40 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.glo and map.dat. The
first contains commands restoring the global data in custom format, and the second
contains a map from oids to database names in text format. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat and toc.glo,
it restores the global settings from toc.glo if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
v09
---
doc/src/sgml/ref/pg_dumpall.sgml | 104 ++++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 7 +-
src/bin/pg_dump/connectdb.h | 2 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 29 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 617 +++++++++++++++++++++------
src/bin/pg_dump/pg_restore.c | 605 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 27 ++
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
15 files changed, 1705 insertions(+), 166 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8834b7ec141..75de1fee330 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named by the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.glo</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in the
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have the database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
@@ -937,9 +1020,13 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
<title>Examples</title>
<para>
To dump all databases:
-
+ If a format is given, the dump is produced in that format; the default is plain.
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=directory -f db.out</userinput>
</screen>
</para>
@@ -956,6 +1043,15 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
the script will attempt to drop other databases immediately, and that
will fail for the database you are connected to.
</para>
+
+ <para>
+ If the dump was taken in a non-text format, then use <application>pg_restore</application> to restore all databases.
+<screen>
+<prompt>$</prompt> <userinput>pg_restore db.out -d postgres -C</userinput>
+</screen>
+ This will restore all the databases. To skip restoring particular databases,
+ use <option>--exclude-database</option> to exclude them.
+</para>
</refsect1>
<refsect1>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restore will be run in that
+ database.
+
+ Otherwise, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..d3e9e27003e 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -225,7 +225,7 @@ ConnectDatabase(const char *dbname, const char *connection_string,
exit_nicely(1);
}
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, false));
return conn;
}
@@ -275,7 +275,7 @@ constructConnStr(const char **keywords, const char **values)
* Run a query, return the results, exit program on failure.
*/
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
PGresult *res;
@@ -287,7 +287,8 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
index 6c1e1954769..0b741b68cb1 100644
--- a/src/bin/pg_dump/connectdb.h
+++ b/src/bin/pg_dump/connectdb.h
@@ -22,5 +22,5 @@ extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version,
char *password, char *override_dbname);
-extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern PGresult *executeQuery(PGconn *conn, const char *query, bool is_archive);
#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the archive pointer in
+ * the already-registered shutdown hook instead of registering a new one.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c84b017f21b..5b8dd295070 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, open the output file in append mode, since we are
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,15 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE entries when only globals are being restored. */
+ if (globals_only && te->tag && strcmp(te->tag, "DROP_DATABASE") == 0)
+ continue;
+
+ /* Skip \connect meta-commands when restoring directly to a database. */
+ if (!ropt->filename && te->tag &&
+ strcmp(te->tag, "CONNECT") == 0)
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1330,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1709,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1730,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a00918bacb4..13e1764ec70 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..725365f6519 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool output_clean);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,11 +77,13 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
static const char *connstr = "";
-static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
static bool dosync = true;
@@ -123,6 +126,10 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +155,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +205,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format_name = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +217,7 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +256,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -256,7 +268,7 @@ main(int argc, char *argv[])
break;
case 'c':
- output_clean = true;
+ dopt.outputClean = true;
break;
case 'd':
@@ -274,7 +286,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format_name = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +328,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -419,7 +434,7 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- if (if_exists && !output_clean)
+ if (if_exists && !dopt.outputClean)
pg_fatal("option --if-exists requires option -c/--clean");
if (roles_only && tablespaces_only)
@@ -429,6 +444,25 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(format_name);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option --restrict-key can only be used with --format=plain");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +523,27 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For
+ * non-plain-text formats, create the output directory instead.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +593,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +627,110 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global SQL commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set file path for global sql commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
+
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings ? std_strings : "off";
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -625,7 +740,7 @@ main(int argc, char *argv[])
* and tablespaces never depend on each other. Roles could have
* grants to each other, but DROP ROLE will clean those up silently.
*/
- if (output_clean)
+ if (dopt.outputClean)
{
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
@@ -659,27 +774,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, dopt.outputClean);
- PQfinish(conn);
-
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +820,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or in other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +902,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -786,12 +919,17 @@ dropRoles(PGconn *conn)
"FROM %s "
"ORDER BY 1", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +937,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ resetPQExpBuffer(delQry);
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
+ destroyPQExpBuffer(delQry);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -871,7 +1015,7 @@ dumpRoles(PGconn *conn)
"FROM %s "
"ORDER BY 2", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_oid = PQfnumber(res, "oid");
i_rolname = PQfnumber(res, "rolname");
@@ -889,7 +1033,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1142,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1153,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1076,7 +1226,7 @@ dumpRoleMembership(PGconn *conn)
"LEFT JOIN %s ug on ug.oid = a.grantor "
"WHERE NOT (ur.rolname ~ '^pg_' AND um.rolname ~ '^pg_')"
"ORDER BY 1,2,3", role_catalog, role_catalog, role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_role = PQfnumber(res, "role");
i_member = PQfnumber(res, "member");
i_grantor = PQfnumber(res, "grantor");
@@ -1088,7 +1238,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1322,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1379,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1401,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
+
+ destroyPQExpBuffer(creaQry);
}
}
@@ -1260,7 +1421,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1285,10 +1447,15 @@ dumpRoleGUCPrivs(PGconn *conn)
"paracl, "
"pg_catalog.acldefault('p', " CppAsString2(BOOTSTRAP_SUPERUSERID) ") AS acldefault "
"FROM pg_catalog.pg_parameter_acl "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1479,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1503,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1339,23 +1512,34 @@ dropTablespaces(PGconn *conn)
res = executeQuery(conn, "SELECT spcname "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ resetPQExpBuffer(delQry);
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1379,10 +1563,15 @@ dumpTablespaces(PGconn *conn)
"pg_catalog.shobj_description(oid, 'pg_tablespace') "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1640,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1479,10 +1673,15 @@ dropDBs(PGconn *conn)
"SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY datname");
+ "ORDER BY datname", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1696,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1525,14 +1732,25 @@ dumpUserConfig(PGconn *conn, const char *username)
appendStringLiteralConn(buf, username, conn);
appendPQExpBufferChar(buf, ')');
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
if (PQntuples(res) > 0)
{
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1760,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1591,7 +1813,7 @@ expand_dbname_patterns(PGconn *conn,
exit_nicely(1);
}
- res = executeQuery(conn, query->data);
+ res = executeQuery(conn, query->data, fout ? true : false);
for (int i = 0; i < PQntuples(res); i++)
{
simple_string_list_append(names, PQgetvalue(res, i, 0));
@@ -1608,10 +1830,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool output_clean)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1850,49 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY (datname <> 'template1'), datname");
+ "ORDER BY (datname <> 'template1'), datname",
+ fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; pg_dump will then create each
+ * database's dump file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1909,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1935,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must do so ourselves */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain format dump, then compute the per-database dump
+ * path and append the dboid/dbname pair to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +1983,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +1996,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2005,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain-format dump, pass the output file name and the
+ * dump format to pg_dump so that it produces an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1766,7 +2077,7 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
PGresult *res;
buildShSecLabelQuery(catalog_name, objectId, sql);
- res = executeQuery(conn, sql->data);
+ res = executeQuery(conn, sql->data, fout ? true : false);
emitShSecLabels(conn, res, buffer, objtype, objname);
PQclear(res);
@@ -1868,3 +2179,67 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * This will parse and validate the specified dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * This will return the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * This creates one global-TOC archive entry for the given SQL and tag.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ CatalogId nilCatalogId = {0, 0};
+
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..18ea8869a97 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,12 +41,16 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -54,18 +58,43 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global sql commands. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +355,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +385,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +517,121 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.glo file is present, then restore all the
+ * databases from map.dat, but skip restoring those matching
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If globals-only, then return from here. */
+ if (globals_only)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+ n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+ }
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ /* Process if toc.glo file does not exist. */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, int num, bool globals_only)
+{
+ int nerror;
+ int format = opts->format;
+
+ /* Set format as custom so that toc.glo file can be read. */
+ opts->format = archCustom;
+
+ nerror = restore_one_database(inputFileSpec, opts, numWorkers,
+ false, num, globals_only);
+
+ /* Reset format value. */
+ opts->format = format;
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * This will restore one database using its toc.dat file.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +639,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, just update the AH handle used for cleanup:
+ * the previous entry is already in the on-exit array and its connection has
+ * been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +667,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +699,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +716,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +752,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +858,410 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * This will mark for skipping any entries in dbname_oid_list that match
+ * an entry in the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ *
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns are matched as literal names");
+
+ /*
+ * Process one by one all dbnames and if specified to skip restoring, then
+ * remove dbname from list.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data, false);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open the map.dat file, read it line by line, and build a list of
+ * database names and their corresponding OIDs.
+ *
+ * Returns the total number of database names found in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in dump, then return from here as
+ * there is no database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line; strip the trailing newline, if any */
+ dbname = p;
+ namelen = strlen(dbname);
+ if (namelen > 0 && dbname[namelen - 1] == '\n')
+ dbname[--namelen] = '\0';
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen < 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen + 1);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * This will restore all databases whose dumps are present in the dump
+ * directory, based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ int count = 0;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ /* Save the db name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If exclude-patterns is given, then connect to the database to process
+ * it.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * filter the db list according to the exclude patterns
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the toc.glo file and restore all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+
+ count++;
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..083f5c5bf9d
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -244,4 +244,31 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named, and those names are used for failure
+# reporting and to save the dump and restore information the test
+# needs for its assertions.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute
+# before running the tests. All setups are executed before any test runs.
+#
+# "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that
+# we can assert on it.
+#
+# "like" and "unlike" are regexps used to match the pg_restore output. Each
+# test case must have at least one of them, but it can have both. See the
+# "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case run so that we
+ # don't need to take care of the cleanup from the target cluster after each
+ # run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test case with dump of pg_dumpall and restore using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When non-exist database is given with -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.47.3
On Thu, 27 Nov 2025 at 13:45, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Thanks Vaibhav for the review.
On Tue, 18 Nov 2025 at 16:05, Vaibhav Dalvi
<vaibhav.dalvi@enterprisedb.com> wrote:

Hi Mahendra,
Thanks Mahendra for working on this.
Looks like my previous comment below is not addressed:
### 1. Use of Dump Options Structure (dopt)
Please ensure consistency by utilizing the main dump options
structure (`dopt`) instead of declaring and using individual variables
where the structure already provides fields. For example, the
`output_clean` variable seems redundant here:
```c
case 'c':
output_clean = true;
dopt.outputClean = 1;
break;
```

Fixed. output_clean was a global variable because it was used in two
functions. Now I am passing dopt.outputClean as a function argument
to the other function.

I agree that the output_clean variable is not added by your patch,
but the introduction of dopt by your patch makes it redundant because
dopt has dopt.outputClean. Please look at the code below from pg_dump.c
for reference:

```c
case 'c':			/* clean (i.e., drop) schema prior to create */
    dopt.outputClean = 1;
    break;
case 25:
    dopt.restrict_key = pg_strdup(optarg);
    break;
```

2.
### 3\. Missing Example in SGML Documentation
The SGML documentation for `pg_dumpall` is missing an explicit
example demonstrating its use with non-text formats (e.g., directory format).
It would be beneficial to include a clear example for this new feature.

I think pg_dumpall should have separate examples similar to pg_dump's,
rather than referencing the pg_dump example, because pg_dumpall
doesn't have to mention the database name without -l or --database
in the command.

Fixed. Added some examples.
3.
1. Is the following change in `src/bin/pg_dump/connectdb.c` intentional?
```
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
```

Yes, we need this. If there is any error, then we were trying to
disconnect the database in two places, so we were getting a crash. I will
try to reproduce the crash without this patch and will respond.

Have you added a test case in the regression suite which fails if we remove
this particular change and works well with the change? Or, if possible, could
you please demonstrate it here at least.

Fixed. With AH (archive), we should not free pointers in this exec call,
as we free them via the exit_nicely hook (we register AH with
on_exit_close_archive).

4. The variable name append_data doesn't look meaningful to me.
Instead we can use append_database/append_databases?
because if this variable is set then we dump the databases along with
global objects. In case of pg_dump, append_data or data_only does make
sense to differentiate between schema and data but in case of pg_dumpall
if this variable is set then we're dumping schema as well as data i.e. in-short
the databases.

As of now, I am keeping append_data, as this was from an already
committed patch.

------------------------------------ pg_dumpall.c ----------------------------------------
5. The variable name formatName doesn't follow the naming convention of
variables available around it. I think use of format_name/formatname would
be better.

char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *formatName = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;

Fixed.
------------------------------------ pg_restore.c ----------------------------------------
6. Fourth parameter (i.e. append_data) to function restore_global_objects() is redundant.
All callers always pass false for this parameter.

I would suggest removing this parameter and, in the definition of this
function, calling restore_one_database() with false as the 4th argument.
Find the diff below:

Fixed.
```
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -64,8 +64,7 @@
 static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
 		int numWorkers, bool append_data, int num, bool globals_only);
 static int restore_global_objects(const char *inputFileSpec,
-		RestoreOptions *opts, int numWorkers, bool append_data,
-		int num, bool globals_only);
+		RestoreOptions *opts, int numWorkers, int num, bool globals_only);
 static int restore_all_databases(const char *inputFileSpec,
 		SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
 static int get_dbnames_list_to_restore(PGconn *conn,
@@ -554,7 +553,7 @@ main(int argc, char **argv)
 		/* Set path for toc.glo file. */
 		snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
-		n_errors = restore_global_objects(global_path, opts, numWorkers, false, 0, globals_only);
+		n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
 		pg_log_info("database restoring skipped because option -g/--globals-only was specified");
 	}
@@ -602,7 +601,7 @@
  * If globals_only is set, then skip DROP DATABASE commands from restore.
  */
 static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
-		int numWorkers, bool append_data, int num, bool globals_only)
+		int numWorkers, int num, bool globals_only)
 {
 	int nerror;
 	int format = opts->format;
@@ -610,8 +609,8 @@
 	/* Set format as custom so that toc.glo file can be read. */
 	opts->format = archCustom;
-	nerror = restore_one_database(inputFileSpec, opts, numWorkers,
-			append_data, num, globals_only);
+	nerror = restore_one_database(inputFileSpec, opts, numWorkers, false, num,
+			globals_only);
 	/* Reset format value. */
 	opts->format = format;
@@ -1097,7 +1096,7 @@ restore_all_databases(const char *inputFileSpec,
 	/* If map.dat has no entries, return after processing global commands. */
 	if (dbname_oid_list.head == NULL)
-		return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+		return restore_global_objects(global_path, opts, numWorkers, 0, false);
 	pg_log_info(ngettext("found %d database name in \"%s\"",
 				"found %d database names in \"%s\"",
@@ -1151,7 +1150,7 @@ restore_all_databases(const char *inputFileSpec,
 	PQfinish(conn);
 	/* Open toc.dat file and execute/append all the global sql commands. */
-	n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+	n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
```

Regression is successful with these changes.
7. Fix indentation:
static int restore_global_objects(const char *inputFileSpec,
RestoreOptions *opts, int numWorkers, bool append_data,
int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);

Fixed some.
8. Remove extra line:
+
static void usage(const char *progname);

Fixed.
9. Remove extra space after map.dat and before comma:
+ * databases from map.dat , but skip restoring those matching
Fixed.
10. Fix 80 char limits:
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+ return restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false, 0, false);
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
Fixed some.
I will do some more cleanup in the coming versions.

Here, I am attaching an updated patch for the review and testing.
Regards,
Vaibhav

On Mon, Nov 17, 2025 at 10:45 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Thanks Andrew for the review.
On Tue, 11 Nov 2025 at 20:41, Andrew Dunstan <andrew@dunslane.net> wrote:

On 2025-11-11 Tu 12:59 AM, Mahendra Singh Thalor wrote:
Hi,
Here, I am attaching an updated patch for the review and testing.

FIX: as suggested by Vaibhav, added an error for the --restrict-key option
with non-text formats.

Regarding the name and format of the globals toc file, I'm inclined to
think we should always use custom format, regardless of whether the
individual databases will be in custom, tar or directory formats, and
that it should be called something distinguishable, e.g. toc.glo.

I also agree with your point. Fixed.
On Mon, 17 Nov 2025 at 19:38, tushar <tushar.ahuja@enterprisedb.com> wrote:
On Tue, Nov 11, 2025 at 11:29 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
On Thu, 6 Nov 2025 at 11:03, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Thanks Vaibhav, Tushar and Andrew for the review and testing.
Thanks Mahendra, getting this error against v07 series patch
[edb@1a1c15437e7c bin]$ ./pg_dumpall -Ft -f tar.dumpc -v
pg_dumpall: executing SELECT pg_catalog.set_config('search_path', '', false);
pg_dumpall: pg_dumpall.c:2256: createOneArchiveEntry: Assertion `fout != ((void *)0)' failed.
Aborted

regards,
Thanks Tushar for the report. Fixed.
Here, I am attaching an updated patch for the review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Hi,
CI was reporting an error for an unused variable.
[08:37:07.338] user 0m14.312s
[08:37:07.338] sys 0m9.155s
[08:37:07.338] make -s -j${BUILD_JOBS} clean
[08:37:07.850] time make -s -j${BUILD_JOBS} world-bin
[08:37:17.443] pg_restore.c:1080:8: error: variable 'count' set but
not used [-Werror,-Wunused-but-set-variable]
[08:37:17.443] 1080 | int count = 0;
[08:37:17.443] | ^
[08:37:17.443] 1 error generated.
[08:37:17.443] make[3]: *** [<builtin>: pg_restore.o] Error 1
[08:37:17.443] make[3]: *** Waiting for unfinished jobs....
[08:37:17.708] make[2]: *** [Makefile:45: all-pg_dump-recurse] Error 2
[08:37:17.709] make[1]: *** [Makefile:42: all-bin-recurse] Error 2
[08:37:17.709] mak
Fixed. Here, I am attaching an updated patch for the review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v10_27112025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch (application/octet-stream)
From ab50bb52bf2a97c248decfd5e5b3ca64e9e27d9d Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Thu, 27 Nov 2025 14:43:16 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.glo and map.dat. The
first contains commands restoring the global data in custom format, and the second
contains a map from oids to database names in text format. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat and toc.glo,
it restores the global settings from toc.glo if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
v10
---
doc/src/sgml/ref/pg_dumpall.sgml | 104 ++++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 7 +-
src/bin/pg_dump/connectdb.h | 2 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 29 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 617 +++++++++++++++++++++------
src/bin/pg_dump/pg_restore.c | 602 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 27 ++
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
15 files changed, 1702 insertions(+), 166 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8834b7ec141..75de1fee330 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option> option, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.glo</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, with each archive named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ See <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
@@ -937,9 +1020,13 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
<title>Examples</title>
<para>
To dump all databases:
-
+ If a format is specified, the dump is produced in that format; the default is plain.
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=directory -f db.out</userinput>
</screen>
</para>
@@ -956,6 +1043,15 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
the script will attempt to drop other databases immediately, and that
will fail for the database you are connected to.
</para>
+
+ <para>
+ If the dump was taken in a non-text format, use <application>pg_restore</application> to restore all databases:
+<screen>
+<prompt>$</prompt> <userinput>pg_restore db.out -d postgres -C</userinput>
+</screen>
+ This will restore all the databases. To skip restoring particular databases,
+ use the <option>--exclude-database</option> option.
+</para>
</refsect1>
<refsect1>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..d3e9e27003e 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -225,7 +225,7 @@ ConnectDatabase(const char *dbname, const char *connection_string,
exit_nicely(1);
}
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, false));
return conn;
}
@@ -275,7 +275,7 @@ constructConnStr(const char **keywords, const char **values)
* Run a query, return the results, exit program on failure.
*/
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
PGresult *res;
@@ -287,7 +287,8 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
index 6c1e1954769..0b741b68cb1 100644
--- a/src/bin/pg_dump/connectdb.h
+++ b/src/bin/pg_dump/connectdb.h
@@ -22,5 +22,5 @@ extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version,
char *password, char *override_dbname);
-extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern PGresult *executeQuery(PGconn *conn, const char *query, bool is_archive);
#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the archive entry
+ * previously registered for cleanup on exit.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c84b017f21b..5b8dd295070 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, open the output file in append mode, since we are
+ * restoring multiple databases from a dump taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,15 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip for CONNECT meta command. */
+ if (!ropt->filename && te && te->tag &&
+ (strcmp(te->tag, "CONNECT") == 0))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1330,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1709,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1730,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a00918bacb4..13e1764ec70 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..725365f6519 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool output_clean);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,11 +77,13 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
static const char *connstr = "";
-static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
static bool dosync = true;
@@ -123,6 +126,10 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +155,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +205,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format_name = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +217,7 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +256,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -256,7 +268,7 @@ main(int argc, char *argv[])
break;
case 'c':
- output_clean = true;
+ dopt.outputClean = true;
break;
case 'd':
@@ -274,7 +286,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format_name = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +328,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -419,7 +434,7 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- if (if_exists && !output_clean)
+ if (if_exists && !dopt.outputClean)
pg_fatal("option --if-exists requires option -c/--clean");
if (roles_only && tablespaces_only)
@@ -429,6 +444,25 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(format_name);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option --restrict-key can only be used with --format=plain");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +523,27 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. If required,
+ * then create new directory.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +593,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +627,110 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set the file path for the global SQL commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
+
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBufferStr(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ /* std_strings is already the string "on" or "off" */
+ pg_log_info("saving \"standard_conforming_strings = %s\"", std_strings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ std_strings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -625,7 +740,7 @@ main(int argc, char *argv[])
* and tablespaces never depend on each other. Roles could have
* grants to each other, but DROP ROLE will clean those up silently.
*/
- if (output_clean)
+ if (dopt.outputClean)
{
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
@@ -659,27 +774,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, dopt.outputClean);
- PQfinish(conn);
-
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +820,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or in another format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -770,6 +902,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -786,12 +919,17 @@ dropRoles(PGconn *conn)
"FROM %s "
"ORDER BY 1", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +937,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -871,7 +1015,7 @@ dumpRoles(PGconn *conn)
"FROM %s "
"ORDER BY 2", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_oid = PQfnumber(res, "oid");
i_rolname = PQfnumber(res, "rolname");
@@ -889,7 +1033,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1142,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1153,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1076,7 +1226,7 @@ dumpRoleMembership(PGconn *conn)
"LEFT JOIN %s ug on ug.oid = a.grantor "
"WHERE NOT (ur.rolname ~ '^pg_' AND um.rolname ~ '^pg_')"
"ORDER BY 1,2,3", role_catalog, role_catalog, role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_role = PQfnumber(res, "role");
i_member = PQfnumber(res, "member");
i_grantor = PQfnumber(res, "grantor");
@@ -1088,7 +1238,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1322,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1379,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1401,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1260,7 +1421,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1285,10 +1447,15 @@ dumpRoleGUCPrivs(PGconn *conn)
"paracl, "
"pg_catalog.acldefault('p', " CppAsString2(BOOTSTRAP_SUPERUSERID) ") AS acldefault "
"FROM pg_catalog.pg_parameter_acl "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1479,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1503,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1339,23 +1512,34 @@ dropTablespaces(PGconn *conn)
res = executeQuery(conn, "SELECT spcname "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1379,10 +1563,15 @@ dumpTablespaces(PGconn *conn)
"pg_catalog.shobj_description(oid, 'pg_tablespace') "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1640,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1479,10 +1673,15 @@ dropDBs(PGconn *conn)
"SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY datname");
+ "ORDER BY datname", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1696,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1525,14 +1732,25 @@ dumpUserConfig(PGconn *conn, const char *username)
appendStringLiteralConn(buf, username, conn);
appendPQExpBufferChar(buf, ')');
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout != NULL);
if (PQntuples(res) > 0)
{
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1760,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1591,7 +1813,7 @@ expand_dbname_patterns(PGconn *conn,
exit_nicely(1);
}
- res = executeQuery(conn, query->data);
+ res = executeQuery(conn, query->data, fout != NULL);
for (int i = 0; i < PQntuples(res); i++)
{
simple_string_list_append(names, PQgetvalue(res, i, 0));
@@ -1608,10 +1830,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool output_clean)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1850,49 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY (datname <> 'template1'), datname");
+ "ORDER BY (datname <> 'template1'), datname",
+ fout != NULL);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If directory/tar/custom format is specified, create a "databases"
+ * subdirectory under the main directory; pg_dump will then create each
+ * database's dump file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory named "databases" under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file to record each database's OID and name. */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1909,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1935,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * For non-plain formats, compute the per-database dump path and append
+ * the database's OID and name to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one "dboid dbname" line per database to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +1983,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +1996,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2005,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * For non-plain formats, pass the output file name and the dump format
+ * to pg_dump so that it produces an archive-format dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin,
+ dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1766,7 +2077,7 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
PGresult *res;
buildShSecLabelQuery(catalog_name, objectId, sql);
- res = executeQuery(conn, sql->data);
+ res = executeQuery(conn, sql->data, fout != NULL);
emitShSecLabels(conn, res, buffer, objtype, objname);
PQclear(res);
@@ -1868,3 +2179,67 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Returns the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Create one entry in the global archive for the given SQL text and tag.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ CatalogId nilCatalogId = {0, 0};
+
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..9ef84e5a9ec 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,12 +41,16 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -54,18 +58,43 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +118,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +172,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +201,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +228,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* Restore only global SQL commands. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +355,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +385,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -472,6 +517,121 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a toc.glo file is present, restore all the databases listed in
+ * map.dat, skipping any that match an --exclude-database pattern.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, the -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* With --globals-only, restore only the global objects. */
+ if (globals_only)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+ n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+ }
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ /* No toc.glo file, so treat this as a single-database archive. */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restore all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during the
+ * restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, int num, bool globals_only)
+{
+ int nerror;
+ int format = opts->format;
+
+ /* Set format as custom so that toc.glo file can be read. */
+ opts->format = archCustom;
+
+ nerror = restore_one_database(inputFileSpec, opts, numWorkers,
+ false, num, globals_only);
+
+ /* Reset format value. */
+ opts->format = format;
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database from its archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +639,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only replace the archive handle (AH)
+ * registered for cleanup: the previous entry is already in the array and
+ * its connection has been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +667,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +699,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +716,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +752,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +858,407 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Check each database name against the exclude patterns, and mark any
+ * matching entry so that its restore is skipped.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match.
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ pat_cell->val);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data, false);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read the map.dat file line by line and build a list of database names
+ * and their corresponding OIDs.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in the dump, return early, as there are
+ * no databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen + 1);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory,
+ * based on the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ /* Save the connection db name so it can be reused for every database. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If any exclude patterns were given, connect to a database so that
+ * the patterns can be evaluated server-side.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /* Filter the database list according to the exclude patterns. */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the toc.glo file and restore all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* Ignore databases marked for skipping. */
+ if (!OidIsValid(dbidname->oid))
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}.
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..083f5c5bf9d
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -244,4 +244,31 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape backslashes in tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named, and the name is used for failure reporting and to
+# save the dump and restore output that the test asserts on.
+#
+# "setup_sql" is a valid psql script containing SQL commands to execute
+# before the tests run. All setups are executed before any test execution.
+#
+# "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that
+# we can assert on it.
+#
+# "like" and "unlike" are regexps matched against the pg_restore output.
+# Each test case must supply at least one of them, and may supply both.
+# See the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case run so that we
+ # don't need to take care of the cleanup from the target cluster after each
+ # run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases restoring a dump made by pg_dumpall with pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.47.3
On Thu, Nov 27, 2025 at 2:49 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Fixed. Here, I am attaching an updated patch for the review and testing.
Thanks Mahendra, please refer to this scenario where restoring the
dump (of a database containing a tablespace) throws an error
*Steps to reproduce*
1. initdb (./initdb -D data), start the server (./pg_ctl -D data start),
   connect to psql (./psql postgres)
2. create a directory (\! mkdir /tmp/abc), create a tablespace
   (create tablespace a location '/tmp/abc';)
3. create a table (create table t(n int) tablespace a;), insert data
   (insert into t values ('a');)
4. perform pg_dumpall with option -Fc (./pg_dumpall -Fc -f my.d)
5. try to perform pg_restore with option --no-tablespaces
   (./pg_restore --no-tablespaces -Fc my.d -d postgres -C)
Getting this error:
"
pg_restore: error: could not execute query: ERROR: role "edb" already
exists
Command was: CREATE ROLE edb;
ALTER ROLE edb WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION
BYPASSRLS;
pg_restore: error: could not execute query: ERROR: directory
"/tmp/abc/PG_19_202511281" already in use as a tablespace
Command was: CREATE TABLESPACE a OWNER edb LOCATION '/tmp/abc';
pg_restore: warning: errors ignored on restore: 2
"
regards,
Tushar
https://www.enterprisedb.com/
On Mon, Dec 1, 2025 at 6:36 PM tushar <tushar.ahuja@enterprisedb.com> wrote:
> [snip: tablespace restore error report, quoted in full above]
I have observed that when combining the --globals-only option with certain
other switches, a pg_restore operation fails silently.
The attempted restore does not execute, but no error message or warning is
displayed unless the --verbose option is also used.
--these commands run without any message, but the objects are also not
created:
./pg_restore -Fc ok31. -C -d postgres -t mytable --globals-only
./pg_restore -Fc ok31. -C -d postgres --no-tablespaces --globals-only
./pg_restore -Fc ok31. -C -d postgres --no-data --globals-only
with --verbose
[edb@1a1c15437e7c bin]$ ./pg_restore -Fc ok31. -C -d postgres -t myable
--globals-only -v
pg_restore: connecting to database for restore
pg_restore: executing SELECT pg_catalog.set_config('search_path', '',
false);
pg_restore: implied no-schema restore
pg_restore: database restoring skipped because option -g/--globals-only was
specified
we should probably add some message there.
regards,
On Mon, Dec 1, 2025 at 10:47 PM tushar <tushar.ahuja@enterprisedb.com>
wrote:
> [snip: --globals-only silent-failure report, quoted in full above]
Please refer to this scenario where the "--no-comments" switch is ignored
when used with the -Ft/-Fc options of pg_dumpall
*Test Case to reproduce:*
--Connect to psql terminal , create a table and comment :
postgres=# create table t(n int);
CREATE TABLE
postgres=# insert into t values (1);
INSERT 0 1
postgres=# comment on table t is 'testing...';
COMMENT
postgres=# SELECT obj_description('public.t'::regclass, 'pg_class') AS
table_comment ;
table_comment
---------------
testing...
(1 row)
--perform pg_dumpall with
(a) -Fp (./pg_dumpall -Fp --no-comments -f dump.plain)
(b) -Ft (./pg_dumpall -Ft --no-comments -f dump.tar)
Case 1: restore (a) , just run the file (dump.plain) on psql terminal ,
fire this query :
postgres=# SELECT
obj_description('public.t'::regclass, 'pg_class') AS table_comment;
table_comment
---------------
(1 row)
Seems expected .
Case 2: restore (b) via command ( ./pg_restore -Ft dump.tar -d postgres -p
5806 -C )
fire this query :
postgres=# SELECT obj_description('public.t'::regclass, 'pg_class') AS
table_comment ;
table_comment
---------------
testing...
(1 row)
Not expected, i.e. pg_dumpall with option -Ft is still dumping table
comments and ignoring the --no-comments switch.
regards,
Thanks Tushar for the testing and reports.
On Tue, 2 Dec 2025 at 18:45, tushar <tushar.ahuja@enterprisedb.com> wrote:
> [snip: --globals-only and --no-comments reports, quoted in full above]
I tried to fix these issues in the attached patch.
Here, I am attaching an updated patch for the review and testing.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v11_08122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch (application/octet-stream)
From a989b60741926a089a0c2fc372cb2ef007310a96 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Mon, 8 Dec 2025 12:06:28 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.glo and map.dat. The
first contains commands restoring the global data in custom format, and the second
contains a map from oids to database names in text format. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat and toc.glo,
it restores the global settings from toc.glo, if present, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
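
As an illustration of the layout described above, the per-database archive
lookup can be sketched in Python (an illustrative sketch, not part of the
patch; the function name is hypothetical). Per the patch, for a database
with a given oid, pg_restore first looks for {oid}.tar, then {oid}.dmp, and
finally falls back to a directory named {oid}, all under the databases/
subdirectory of the dump:

```python
import os

def find_database_archive(dump_dir, dboid):
    """Locate the per-database archive for a given database oid,
    mirroring the lookup order the patch uses: {oid}.tar, then
    {oid}.dmp, then a directory named {oid}, all under databases/."""
    db_dir = os.path.join(dump_dir, "databases")
    for suffix in (".tar", ".dmp"):
        candidate = os.path.join(db_dir, f"{dboid}{suffix}")
        if os.path.isfile(candidate):
            return candidate
    # Fall back to a directory-format archive named after the oid.
    return os.path.join(db_dir, str(dboid))
```

This is why the archive format does not need to be recorded in map.dat: the
file extension (or its absence) identifies the format of each per-database
archive.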
v11
---
doc/src/sgml/ref/pg_dumpall.sgml | 104 ++++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 7 +-
src/bin/pg_dump/connectdb.h | 2 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 34 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 617 +++++++++++++++++++++------
src/bin/pg_dump/pg_restore.c | 609 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 27 ++
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
15 files changed, 1713 insertions(+), 167 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8834b7ec141..75de1fee330 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named by the
+ <option>-f</option>/<option>--file</option> argument, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have the database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archive formats work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
@@ -937,9 +1020,13 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
<title>Examples</title>
<para>
To dump all databases:
-
+ If a format is specified, the dump is produced in that format; the default is plain.
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=d/c/t/p -f db.out</userinput>
</screen>
</para>
@@ -956,6 +1043,15 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
the script will attempt to drop other databases immediately, and that
will fail for the database you are connected to.
</para>
+
+
+ <para>
+ If the dump was taken in a non-text format, use pg_restore to restore all
+ databases.
+<screen>
+<prompt>$</prompt> <userinput>pg_restore db.out -d postgres -C</userinput>
+</screen>
+ This restores all the databases. To skip particular databases, use
+ <option>--exclude-database</option>.
+ </para>
</refsect1>
<refsect1>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database is created and its contents are then restored into it.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..d3e9e27003e 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -225,7 +225,7 @@ ConnectDatabase(const char *dbname, const char *connection_string,
exit_nicely(1);
}
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, false));
return conn;
}
@@ -275,7 +275,7 @@ constructConnStr(const char **keywords, const char **values)
* Run a query, return the results, exit program on failure.
*/
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
PGresult *res;
@@ -287,7 +287,8 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
index 6c1e1954769..0b741b68cb1 100644
--- a/src/bin/pg_dump/connectdb.h
+++ b/src/bin/pg_dump/connectdb.h
@@ -22,5 +22,5 @@ extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version,
char *password, char *override_dbname);
-extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern PGresult *executeQuery(PGconn *conn, const char *query, bool is_archive);
#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the archive pointer in
+ * the already-registered shutdown entry so cleanup targets the current one.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
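The `replace_on_exit_close_archive()` addition works because the exit hook captures a pointer to `shutdown_info`, not the archive itself. A toy restatement of that bookkeeping (names suffixed `_sketch` are illustrative, not the patch's API):

```c
#include <string.h>

/*
 * Miniature model of parallel.c's shutdown bookkeeping: the exit hook is
 * registered once with the address of shutdown_info, so switching to the
 * next database's archive only requires replacing the stored pointer,
 * not registering another hook.
 */
typedef struct
{
	const char *archive_name;	/* stand-in for Archive *AHX */
} ShutdownInfoSketch;

static ShutdownInfoSketch shutdown_info_sketch = {NULL};
static int	hooks_registered = 0;

/* Register the cleanup hook for the first archive (once per process). */
void
on_exit_close_archive_sketch(const char *archive)
{
	shutdown_info_sketch.archive_name = archive;
	hooks_registered++;
}

/* Re-point the already-registered hook at the current archive. */
void
replace_on_exit_close_archive_sketch(const char *archive)
{
	shutdown_info_sketch.archive_name = archive;
}

/* What the exit hook would close if it fired now. */
const char *
current_cleanup_target(void)
{
	return shutdown_info_sketch.archive_name;
}
```

This is why restoring N databases does not pile up N exit handlers: only the pointer inside the single registered entry changes.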
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c84b017f21b..d35232cd038 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, open the output file in append mode, since we are
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,20 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip the CONNECT meta-command entry when not writing a script file. */
+ if (!ropt->filename && te && te->tag &&
+ (strcmp(te->tag, "CONNECT") == 0))
+ continue;
+
+ /* Skip tablespace entries if --no-tablespaces is given. */
+ if (ropt->noTablespace && te && te->tag && ((strcmp(te->tag, "dumpTablespaces") == 0) ||
+ (strcmp(te->tag, "dropTablespaces") == 0)))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1335,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1714,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1735,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
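The `append_data` plumbing above boils down to a mode choice when opening the script output: successive per-database restores from one pg_dumpall archive must append to the shared file rather than truncate it. A self-contained sketch (on POSIX, `PG_BINARY_A`/`PG_BINARY_W` are effectively `"a"`/`"w"`; Windows adds a `b`):

```c
#include <string.h>

/* Stand-ins for PG_BINARY_A / PG_BINARY_W on POSIX systems. */
#define BINARY_APPEND "a"
#define BINARY_WRITE  "w"

/*
 * Sketch of the mode selection added to SetOutput(): append when either the
 * caller requested it (multi-database restore) or the archive handle itself
 * is already in append mode.
 */
const char *
choose_output_mode(int append_data, int archive_mode_is_append)
{
	if (append_data || archive_mode_is_append)
		return BINARY_APPEND;
	return BINARY_WRITE;
}
```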
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
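The tag-based skip checks added to RestoreArchive() can be restated as one standalone predicate, which also makes the three cases easy to test in isolation (the function name and flag parameters are illustrative, not the patch's API):

```c
#include <string.h>

/*
 * Returns 1 if a cluster-archive TOC entry should be ignored, mirroring the
 * checks added to RestoreArchive(): --globals-only suppresses DROP DATABASE,
 * CONNECT entries are only emitted into script files, and --no-tablespaces
 * suppresses both the dump and drop tablespace entries.
 */
int
should_skip_entry(const char *tag, int globals_only, int no_tablespaces,
				  int writing_to_file)
{
	if (tag == NULL)
		return 0;

	/* With --globals-only, never drop databases. */
	if (globals_only && strcmp(tag, "DROP_DATABASE") == 0)
		return 1;

	/* \connect-style entries only make sense in script output. */
	if (!writing_to_file && strcmp(tag, "CONNECT") == 0)
		return 1;

	/* --no-tablespaces suppresses both CREATE and DROP entries. */
	if (no_tablespaces &&
		(strcmp(tag, "dumpTablespaces") == 0 ||
		 strcmp(tag, "dropTablespaces") == 0))
		return 1;

	return 0;
}
```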
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2445085dbbd..e1a1711254d 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1292,7 +1292,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index bb451c1bae1..01e3683c84b 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool output_clean);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,11 +77,13 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
static const char *connstr = "";
-static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
static bool dosync = true;
@@ -123,6 +126,10 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +155,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +205,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format_name = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +217,7 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -246,7 +256,9 @@ main(int argc, char *argv[])
pgdumpopts = createPQExpBuffer();
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ InitDumpOptions(&dopt);
+
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -256,7 +268,7 @@ main(int argc, char *argv[])
break;
case 'c':
- output_clean = true;
+ dopt.outputClean = true;
break;
case 'd':
@@ -274,7 +286,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format_name = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +328,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -419,7 +434,7 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- if (if_exists && !output_clean)
+ if (if_exists && !dopt.outputClean)
pg_fatal("option --if-exists requires option -c/--clean");
if (roles_only && tablespaces_only)
@@ -429,6 +444,25 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Get format for dump. */
+ archDumpFormat = parseDumpFormat(format_name);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option -F/--format=d|c|t requires option -f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option --restrict-key can only be used with --format=plain");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -489,6 +523,27 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout.  For non-plain
+ * formats, create the top-level output directory instead.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -538,19 +593,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -585,37 +627,110 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set the file path for the global SQL commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
+
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle*)fout)->connection = conn;
+ ((ArchiveHandle*)fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version.  (See also version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings; /* already "on" or "off" */
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -625,7 +740,7 @@ main(int argc, char *argv[])
* and tablespaces never depend on each other. Roles could have
* grants to each other, but DROP ROLE will clean those up silently.
*/
- if (output_clean)
+ if (dopt.outputClean)
{
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
@@ -659,27 +774,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
+ dumpDatabases(conn, dopt.outputClean);
- PQfinish(conn);
-
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -690,12 +820,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
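The patch declares `createDumpId()` and `createOneArchiveEntry()` but their bodies fall outside this hunk; from the call sites, each global SQL fragment apparently becomes one TOC entry with a sequential dump ID. A toy model of that assumed shape (all names here are stand-ins, and the real helper would call ArchiveEntry()):

```c
#include <stdio.h>

/* One global SQL fragment as recorded in the cluster TOC. */
typedef struct
{
	int			dump_id;
	char		tag[64];
	const char *query;
} TocEntrySketch;

static int	next_dump_id = 0;
static TocEntrySketch entries[128];
static int	n_entries = 0;

/* Hand out sequential dump IDs, as the patch's createDumpId() presumably does. */
static int
create_dump_id(void)
{
	return ++next_dump_id;
}

/* Append one SQL fragment to the global TOC under the given tag. */
void
create_one_archive_entry(const char *query, const char *tag)
{
	TocEntrySketch *e = &entries[n_entries++];

	e->dump_id = create_dump_id();
	snprintf(e->tag, sizeof(e->tag), "%s", tag);
	e->query = query;
}
```

Tags such as "COMMENT", "dumpRoles", or "DROP_DATABASE" then do double duty at restore time, driving the skip logic in RestoreArchive().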
@@ -770,6 +902,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -786,12 +919,17 @@ dropRoles(PGconn *conn)
"FROM %s "
"ORDER BY 1", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -799,15 +937,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -871,7 +1015,7 @@ dumpRoles(PGconn *conn)
"FROM %s "
"ORDER BY 2", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_oid = PQfnumber(res, "oid");
i_rolname = PQfnumber(res, "rolname");
@@ -889,7 +1033,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -993,7 +1142,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1001,15 +1153,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1076,7 +1226,7 @@ dumpRoleMembership(PGconn *conn)
"LEFT JOIN %s ug on ug.oid = a.grantor "
"WHERE NOT (ur.rolname ~ '^pg_' AND um.rolname ~ '^pg_')"
"ORDER BY 1,2,3", role_catalog, role_catalog, role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_role = PQfnumber(res, "role");
i_member = PQfnumber(res, "member");
i_grantor = PQfnumber(res, "grantor");
@@ -1088,7 +1238,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1167,6 +1322,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1223,8 +1379,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1245,10 +1401,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1260,7 +1421,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1285,10 +1447,15 @@ dumpRoleGUCPrivs(PGconn *conn)
"paracl, "
"pg_catalog.acldefault('p', " CppAsString2(BOOTSTRAP_SUPERUSERID) ") AS acldefault "
"FROM pg_catalog.pg_parameter_acl "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1312,14 +1479,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1331,6 +1503,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1339,23 +1512,34 @@ dropTablespaces(PGconn *conn)
res = executeQuery(conn, "SELECT spcname "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1379,10 +1563,15 @@ dumpTablespaces(PGconn *conn)
"pg_catalog.shobj_description(oid, 'pg_tablespace') "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1451,14 +1640,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1479,10 +1673,15 @@ dropDBs(PGconn *conn)
"SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY datname");
+ "ORDER BY datname", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1497,15 +1696,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1525,14 +1732,25 @@ dumpUserConfig(PGconn *conn, const char *username)
appendStringLiteralConn(buf, username, conn);
appendPQExpBufferChar(buf, ')');
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout != NULL);
if (PQntuples(res) > 0)
{
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1542,7 +1760,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1591,7 +1813,7 @@ expand_dbname_patterns(PGconn *conn,
exit_nicely(1);
}
- res = executeQuery(conn, query->data);
+ res = executeQuery(conn, query->data, fout != NULL);
for (int i = 0; i < PQntuples(res); i++)
{
simple_string_list_append(names, PQgetvalue(res, i, 0));
@@ -1608,10 +1830,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool output_clean)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1625,19 +1850,49 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY (datname <> 'template1'), datname");
+ "ORDER BY (datname <> 'template1'), datname",
+ fout != NULL);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * For non-plain formats, create a "databases" subdirectory under the
+ * main output directory; pg_dump will then create each database's dump
+ * file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the "databases" subdirectory under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1654,7 +1909,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1669,24 +1935,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ else if (archDumpFormat == archNull)
+ {
+ /* Since pg_dump won't emit a \connect command, we must do it. */
+ fprintf(OPF, "\\connect %s\n\n", dbname);
+ }
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain-format dump, compute the per-database dump
+ * path and append a dboid/dbname entry to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Put one line entry for dboid and dbname in map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1695,6 +1983,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1704,7 +1996,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1713,17 +2005,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain-format dump, append the output file name and
+ * dump format to the pg_dump command to produce an archive dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s -f %s %s", pg_dump_bin,
+ pgdumpopts->data, dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1766,7 +2077,7 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
PGresult *res;
buildShSecLabelQuery(catalog_name, objectId, sql);
- res = executeQuery(conn, sql->data);
+ res = executeQuery(conn, sql->data, fout != NULL);
emitShSecLabels(conn, res, buffer, objtype, objname);
PQclear(res);
@@ -1868,3 +2179,67 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Return the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Create one archive entry in the global TOC for the given SQL and tag.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ CatalogId nilCatalogId = {0, 0};
+
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index c9776306c5c..610b2ebf96f 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,12 +41,16 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -54,18 +58,44 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+static bool data_only = false;
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
- bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +119,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +173,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* Restore only global SQL commands. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +356,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +386,13 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option --exclude-database cannot be used together with -g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -409,6 +455,9 @@ main(int argc, char **argv)
if (opts->single_txn && opts->txn_size > 0)
pg_fatal("options -1/--single-transaction and --transaction-size cannot be used together");
+ if (data_only && globals_only)
+ pg_fatal("options -a/--data-only and -g/--globals-only cannot be used together");
+
/*
* -C is not compatible with -1, because we can't create a database inside
* a transaction block.
@@ -472,6 +521,122 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.glo file is present, then restore all the
+ * databases from map.dat, but skip restoring those matching
+ * --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option -l/--list cannot be used when restoring an archive created by pg_dumpall");
+ else if (opts->tocFile)
+ pg_fatal("option -L/--use-list cannot be used when restoring an archive created by pg_dumpall");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option -C/--create must be specified when restoring an archive created by pg_dumpall");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* With --globals-only, restore just the globals and we are done. */
+ if (globals_only)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+ n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
+
+ pg_log_info("skipping database restore because option -g/--globals-only was specified");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option --exclude-database can be used only when restoring an archive created by pg_dumpall");
+ }
+
+ if (globals_only)
+ pg_fatal("option -g/--globals-only can be used only when restoring an archive created by pg_dumpall");
+
+ /* No toc.glo file, so restore a single pg_dump archive. */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restore all global objects from the global TOC.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, int num, bool globals_only)
+{
+ int nerror = 0;
+ int format = opts->format;
+
+ /* Set format as custom so that toc.glo file can be read. */
+ opts->format = archCustom;
+
+ if (!data_only)
+ nerror = restore_one_database(inputFileSpec, opts, numWorkers,
+ false, num, globals_only);
+
+ /* Reset format value. */
+ opts->format = format;
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database from the given archive.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -479,9 +644,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, just replace the archive handle (AH) in the
+ * cleanup array: the previous entry is already there and its connection has
+ * been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -501,25 +672,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -537,6 +704,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -553,6 +721,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -588,8 +757,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -694,3 +863,407 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection is available, so --exclude-database patterns will be matched as literal names");
+
+ /*
+ * Check each database name against the exclude patterns and mark matching
+ * entries to be skipped.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data, false);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read map.dat line by line and build a list of database names with their
+ * corresponding OIDs.
+ *
+ * Returns the total number of database names found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in dump, then return from here as
+ * there is no database to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("skipping database restore because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ /* strlcpy with namelen drops the trailing newline kept by pg_get_line_buf */
+ strlcpy(dbidname->str, dbname, namelen);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, based on
+ * the map.dat file mapping.
+ *
+ * Databases matching the --exclude-database patterns are skipped.
+ *
+ * Returns the number of errors encountered during restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ /* Save the database name so it can be reused for each database. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If --exclude-database patterns were given, connect to a database so
+ * that they can be matched server-side.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * Filter the database list according to the exclude patterns.
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the toc.glo file and restore all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
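The per-database loop in this hunk can be sketched in miniature. The following Python model (illustrative only, not the patch's C; the function names here are hypothetical) shows the archive-resolution cascade ({oid}.tar, then {oid}.dmp, then a bare {oid} directory) and the accumulation of ignored-error counts:

```python
import os

def resolve_db_archive(dump_dir, oid):
    # Mirror the cascade above: prefer {oid}.tar, then {oid}.dmp,
    # otherwise fall back to a directory named {oid}.
    base = os.path.join(dump_dir, "databases")
    for suffix in (".tar", ".dmp"):
        candidate = os.path.join(base, f"{oid}{suffix}")
        if os.path.isfile(candidate):
            return candidate
    return os.path.join(base, str(oid))

def restore_all_databases(dump_dir, db_map, restore_one):
    # db_map: {oid: dbname}; restore_one returns the count of ignored
    # errors for one database, summed like n_errors_total in the C code.
    total_errors = 0
    for oid, name in db_map.items():
        total_errors += restore_one(resolve_db_archive(dump_dir, oid), name)
    return total_errors
```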
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..083f5c5bf9d
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -244,4 +244,31 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and also to save the dump and restore information needed for the
+# test to assert.
+#
+# The "setup_sql" is a valid psql script containing SQL commands to execute
+# before actually running the tests. The setups are all executed before any
+# test execution.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that we
+# can assert on it.
+#
+# The "like" and "unlike" entries are regexps used to match the pg_restore
+# output. Each test case must fill in at least one of them, but it may also
+# have both. See the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE db3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^ \n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case run so that we
+ # don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with a dump of pg_dumpall restored using pg_restore
+# test case 1: when -C is not used in pg_restore with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: When a non-existent database is given with -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.47.3
On Mon, Dec 8, 2025 at 12:14 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
I tried to fix these issues in the attached patch.
Here, I am attaching an updated patch for the review and testing.
Thanks Mahendra, I am not able to apply the patch against the latest
sources; it seems you need to rebase it.
[edb@1a1c15437e7c pg]$ git apply
/tmp/v11_08122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
error: patch failed: src/bin/pg_dump/pg_dumpall.c:419
error: src/bin/pg_dump/pg_dumpall.c: patch does not apply
error: patch failed: src/bin/pg_dump/pg_restore.c:409
error: src/bin/pg_dump/pg_restore.c: patch does not apply
[edb@1a1c15437e7c pg]$
regards,
On Mon, 8 Dec 2025 at 22:39, tushar <tushar.ahuja@enterprisedb.com> wrote:
Thanks Tushar for the report.
In the last commit, there were some changes for error messages, so this
was not applying cleanly.
I have observed that when combining the --globals-only option with certain
other switches, a pg_restore operation fails silently. The attempted restore
does not execute, but no error message or warning is displayed unless the
--verbose option is also used. These will just run without any message, but
the objects are also not going to be created:
./pg_restore -Fc ok31. -C -d postgres -t mytable --globals-only
./pg_restore -Fc ok31. -C -d postgres -no-tablespace --globals-only
./pg_restore -Fc ok31. -C -d postgres -no-data --globals-only
With --verbose:
[edb@1a1c15437e7c bin]$ ./pg_restore -Fc ok31. -C -d postgres -t myable --globals-only -v
pg_restore: connecting to database for restore
pg_restore: executing SELECT pg_catalog.set_config('search_path', '', false);
pg_restore: implied no-schema restore
pg_restore: database restoring skipped because option -g/--globals-only was specified
We should probably add some message there.
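A toy model of the behavior being reported (illustrative Python with hypothetical names; the real logic lives in pg_restore's C code): the skip notice is emitted only at verbose level, so combining --globals-only with other switches looks like a silent no-op:

```python
def plan_globals_only_restore(globals_only, verbose):
    # Returns (restore_databases, messages_shown_to_user). Today the
    # skip notice is info-level, visible only with --verbose; the
    # suggestion in this thread is to surface it unconditionally.
    messages = []
    if globals_only:
        if verbose:
            messages.append("database restoring skipped because option "
                            "-g/--globals-only was specified")
        return False, messages
    return True, messages
```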
All these are good to me. In a successful case, we don't receive any
error message (expected).
Here, I am attaching an updated patch for review and testing. This
can be applied on commit d0d0ba6cf66c4043501f6f7.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachment: v12_09122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
From 253baa9cca7ed9719e248d892c9b9665cc832c43 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 9 Dec 2025 00:08:18 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.glo and map.dat. The
first contains commands restoring the global data in custom format, and the second
contains a map from oids to database names in text format. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat and toc.glo,
it restores the global settings from toc.glo if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
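Under the layout this commit message describes, a consumer of the archive would detect a pg_dumpall dump and load the oid-to-name map roughly like this (an illustrative Python sketch; the exact map.dat line format and the globals file name variants are assumptions, not taken verbatim from the patch):

```python
import os

DUMPALL_GLOBALS_FILES = ("toc.glo", "toc.dat", "toc.dmp", "toc.tar")

def is_dumpall_archive(dump_dir):
    # Per the commit message: a pg_dumpall archive is a directory
    # holding map.dat plus a globals file.
    return (os.path.isfile(os.path.join(dump_dir, "map.dat"))
            and any(os.path.isfile(os.path.join(dump_dir, f))
                    for f in DUMPALL_GLOBALS_FILES))

def read_db_map(dump_dir):
    # Assumed format: one "oid dbname" pair per line.
    mapping = {}
    with open(os.path.join(dump_dir, "map.dat")) as f:
        for line in f:
            line = line.strip()
            if line:
                oid, name = line.split(" ", 1)
                mapping[int(oid)] = name
    return mapping
```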
v12
---
doc/src/sgml/ref/pg_dumpall.sgml | 104 ++++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 7 +-
src/bin/pg_dump/connectdb.h | 2 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 34 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 618 +++++++++++++++++++++------
src/bin/pg_dump/pg_restore.c | 617 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 27 ++
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
15 files changed, 1722 insertions(+), 167 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8834b7ec141..75de1fee330 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+ Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ the <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
@@ -937,9 +1020,13 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
<title>Examples</title>
<para>
To dump all databases:
-
+ If a format is given, the dump is produced in that format; the default is plain.
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=d/c/t/p -f db.out</userinput>
</screen>
</para>
@@ -956,6 +1043,15 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
the script will attempt to drop other databases immediately, and that
will fail for the database you are connected to.
</para>
+
+ <para>
+ If the dump was taken in a non-text format, use pg_restore to restore all databases.
+<screen>
+<prompt>$</prompt> <userinput>pg_restore db.out -d postgres -C</userinput>
+</screen>
+ This will restore all the databases. To skip some databases, use
+ <option>--exclude-database</option> to exclude them.
+</para>
</refsect1>
<refsect1>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index d55d53dbeea..d3e9e27003e 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -225,7 +225,7 @@ ConnectDatabase(const char *dbname, const char *connection_string,
exit_nicely(1);
}
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, false));
return conn;
}
@@ -275,7 +275,7 @@ constructConnStr(const char **keywords, const char **values)
* Run a query, return the results, exit program on failure.
*/
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
PGresult *res;
@@ -287,7 +287,8 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
index 6c1e1954769..0b741b68cb1 100644
--- a/src/bin/pg_dump/connectdb.h
+++ b/src/bin/pg_dump/connectdb.h
@@ -22,5 +22,5 @@ extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version,
char *password, char *override_dbname);
-extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern PGresult *executeQuery(PGconn *conn, const char *query, bool is_archive);
#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index f3c669f484e..3e21aaf5780 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 086adcdc502..5974d6706fd 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, then update already added entry
+ * into array for cleanup.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c84b017f21b..d35232cd038 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file; this is used when
+ * restoring a dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,20 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip the CONNECT meta-command when no output file is given. */
+ if (!ropt->filename && te && te->tag &&
+ (strcmp(te->tag, "CONNECT") == 0))
+ continue;
+
+ /* Skip tablespace entries if --no-tablespaces was given. */
+ if (ropt->noTablespace && te && te->tag && ((strcmp(te->tag, "dumpTablespaces") == 0) ||
+ (strcmp(te->tag, "dropTablespaces") == 0)))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1335,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1714,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1735,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 24ad201af2f..f44d5e9d037 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1306,7 +1306,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 8fa04930399..8d4aac157ac 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool output_clean);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,11 +77,13 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
static const char *connstr = "";
-static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
static bool dosync = true;
@@ -123,6 +126,10 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +155,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +205,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format_name = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +217,7 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -245,8 +255,9 @@ main(int argc, char *argv[])
}
pgdumpopts = createPQExpBuffer();
+ InitDumpOptions(&dopt);
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -256,7 +267,7 @@ main(int argc, char *argv[])
break;
case 'c':
- output_clean = true;
+ dopt.outputClean = true;
break;
case 'd':
@@ -274,7 +285,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format_name = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +327,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -423,7 +437,7 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- if (if_exists && !output_clean)
+ if (if_exists && !dopt.outputClean)
pg_fatal("option %s requires option %s",
"--if-exists", "-c/--clean");
@@ -435,6 +449,27 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Determine the dump format. */
+ archDumpFormat = parseDumpFormat(format_name);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option %s=d|c|t requires option %s",
+ "-F/--format", "-f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option %s can only be used with %s=plain",
+ "--restrict-key", "--format");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -495,6 +530,27 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * Open the output file if required, otherwise use stdout. For non-plain
+ * formats, create the output directory instead.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create new directory or accept the empty existing directory. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -544,19 +600,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -591,37 +634,110 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set the file path for global SQL commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
+
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle*)fout)->connection = conn;
+ ((ArchiveHandle*)fout)->public.numWorkers = 1;
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release of
+ * our own major version. (See also the version check in pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
+
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /* dumpStdStrings: put the correct escape string behavior into the archive */
+ {
+ const char *stdstrings = std_strings;
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the dump
+ * output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so whichever
+ * database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -631,7 +747,7 @@ main(int argc, char *argv[])
* and tablespaces never depend on each other. Roles could have
* grants to each other, but DROP ROLE will clean those up silently.
*/
- if (output_clean)
+ if (dopt.outputClean)
{
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
@@ -665,27 +781,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump will
+ * handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
-
- PQfinish(conn);
+ dumpDatabases(conn, dopt.outputClean);
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -696,12 +827,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or in another format.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -776,6 +909,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -792,12 +926,17 @@ dropRoles(PGconn *conn)
"FROM %s "
"ORDER BY 1", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -805,15 +944,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -877,7 +1022,7 @@ dumpRoles(PGconn *conn)
"FROM %s "
"ORDER BY 2", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_oid = PQfnumber(res, "oid");
i_rolname = PQfnumber(res, "rolname");
@@ -895,7 +1040,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -999,7 +1149,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1007,15 +1160,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1082,7 +1233,7 @@ dumpRoleMembership(PGconn *conn)
"LEFT JOIN %s ug on ug.oid = a.grantor "
"WHERE NOT (ur.rolname ~ '^pg_' AND um.rolname ~ '^pg_')"
"ORDER BY 1,2,3", role_catalog, role_catalog, role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_role = PQfnumber(res, "role");
i_member = PQfnumber(res, "member");
i_grantor = PQfnumber(res, "grantor");
@@ -1094,7 +1245,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1173,6 +1329,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1229,8 +1386,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1251,10 +1408,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1266,7 +1428,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1291,10 +1454,15 @@ dumpRoleGUCPrivs(PGconn *conn)
"paracl, "
"pg_catalog.acldefault('p', " CppAsString2(BOOTSTRAP_SUPERUSERID) ") AS acldefault "
"FROM pg_catalog.pg_parameter_acl "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1318,14 +1486,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1337,6 +1510,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1345,23 +1519,34 @@ dropTablespaces(PGconn *conn)
res = executeQuery(conn, "SELECT spcname "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1385,10 +1570,15 @@ dumpTablespaces(PGconn *conn)
"pg_catalog.shobj_description(oid, 'pg_tablespace') "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1457,14 +1647,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1485,10 +1680,15 @@ dropDBs(PGconn *conn)
"SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY datname");
+ "ORDER BY datname", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1503,15 +1703,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
if_exists ? "IF EXISTS " : "",
fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1531,14 +1739,25 @@ dumpUserConfig(PGconn *conn, const char *username)
appendStringLiteralConn(buf, username, conn);
appendPQExpBufferChar(buf, ')');
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
if (PQntuples(res) > 0)
{
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1548,7 +1767,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1597,7 +1820,7 @@ expand_dbname_patterns(PGconn *conn,
exit_nicely(1);
}
- res = executeQuery(conn, query->data);
+ res = executeQuery(conn, query->data, fout ? true : false);
for (int i = 0; i < PQntuples(res); i++)
{
simple_string_list_append(names, PQgetvalue(res, i, 0));
@@ -1614,10 +1837,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool output_clean)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1631,19 +1857,49 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY (datname <> 'template1'), datname");
+ "ORDER BY (datname <> 'template1'), datname",
+ fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * For directory/tar/custom format, create a "databases" subdirectory
+ * under the main directory; pg_dump will then write each database's dump
+ * file (or subdirectory) there.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create the 'databases' subdirectory under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file to record each database's OID and name. */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1660,7 +1916,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1675,24 +1942,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain-format dump, append the database OID and name
+ * to the map.dat file.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write a one-line dboid/dbname entry to the map file. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1701,6 +1990,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
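Each map.dat line written by the hunk above has the form `<dboid> <dbname>\n`, with the name possibly containing spaces. A minimal reader for that format might look like this; `parse_map_line` is a hypothetical helper for illustration, not part of the patch:

```c
#include <stdio.h>
#include <string.h>

/*
 * Split one "oid dbname" line from map.dat into its two fields.
 * The OID runs up to the first space; the rest of the line (minus the
 * trailing newline) is the database name. Returns 1 on success, 0 on
 * malformed input or if a field does not fit its buffer.
 */
static int
parse_map_line(const char *line, char *oid, size_t oidsz,
			   char *dbname, size_t namesz)
{
	const char *sp = strchr(line, ' ');
	size_t		olen;
	size_t		nlen;

	if (sp == NULL || sp == line)
		return 0;

	olen = (size_t) (sp - line);
	if (olen >= oidsz)
		return 0;
	memcpy(oid, line, olen);
	oid[olen] = '\0';

	nlen = strlen(sp + 1);
	if (nlen > 0 && sp[1 + nlen - 1] == '\n')	/* strip trailing newline */
		nlen--;
	if (nlen == 0 || nlen >= namesz)
		return 0;
	memcpy(dbname, sp + 1, nlen);
	dbname[nlen] = '\0';
	return 1;
}
```

Splitting at the first space is safe because OIDs never contain spaces, so any embedded spaces belong to the database name.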
@@ -1710,7 +2003,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1719,17 +2012,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain format dump, then append the output file name and
+ * dump format to the pg_dump command to produce an archive-format dump.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s -f %s %s", pg_dump_bin,
+ pgdumpopts->data, dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1772,7 +2084,7 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
PGresult *res;
buildShSecLabelQuery(catalog_name, objectId, sql);
- res = executeQuery(conn, sql->data);
+ res = executeQuery(conn, sql->data, fout != NULL);
emitShSecLabels(conn, res, buffer, objtype, objname);
PQclear(res);
@@ -1874,3 +2186,67 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format name.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Return the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Create one archive TOC entry for the given query and tag.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ CatalogId nilCatalogId = {0, 0};
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 84b8d410c9e..f59813965bc 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,12 +41,16 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -54,18 +58,44 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers,
+ int num, bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+static bool data_only = false;
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
- bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +119,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +173,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global SQL commands */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +356,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +386,14 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option %s cannot be used together with %s",
+ "--exclude-database", "-g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -420,6 +467,10 @@ main(int argc, char **argv)
pg_fatal("options %s and %s cannot be used together",
"-1/--single-transaction", "--transaction-size");
+ if (data_only && globals_only)
+ pg_fatal("options %s and %s cannot be used together",
+ "-a/--data-only", "-g/--globals-only");
+
/*
* -C is not compatible with -1, because we can't create a database inside
* a transaction block.
@@ -485,6 +536,128 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If a toc.glo file is present, this is a pg_dumpall archive: restore
+ * the globals and then every database listed in map.dat, skipping any
+ * that match --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-l/--list");
+ else if (opts->tocFile)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-L/--use-list");
+
+ /*
+ * To restore from a pg_dumpall archive, the -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option %s must be specified when restoring an archive created by pg_dumpall",
+ "-C/--create");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* If globals-only, restore just the globals and we are done. */
+ if (globals_only)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+ n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
+
+ pg_log_info("database restoring skipped because option %s was specified",
+ "-g/--globals-only");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
+ "--exclude-database");
+ }
+
+ if (globals_only)
+ pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
+ "-g/--globals-only");
+
+ /* No toc.glo file, so this is a regular single-database archive. */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, 0, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * This restores all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during the
+ * restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, int num, bool globals_only)
+{
+ int nerror = 0;
+ int format = opts->format;
+
+ /* Set format as custom so that toc.glo file can be read. */
+ opts->format = archCustom;
+
+ if (!data_only)
+ nerror = restore_one_database(inputFileSpec, opts, numWorkers,
+ false, num, globals_only);
+
+ /* Reset format value. */
+ opts->format = format;
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * This will restore one database using its archive's toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -492,9 +665,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, only update the AH handle for cleanup:
+ * the previous entry is already in the array and its connection has been
+ * closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data || num == 0)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -514,25 +693,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -550,6 +725,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -566,6 +742,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -601,8 +778,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -707,3 +884,407 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("considering PATTERN as NAME for --exclude-database option as no database connection while doing pg_restore");
+
+ /*
+ * Process all dbnames one by one; mark any that should be skipped rather
+ * than restored.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data, false);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark db to be skipped or increment the counter of dbs to be
+ * restored
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat file and read line by line and then prepare a list of database
+ * names and corresponding db_oid.
+ *
+ * Returns the total number of database names in the map.dat file.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in the dump, then return from here as
+ * there are no databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen + 1);
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * This will restore all the databases whose dumps are present in the
+ * directory, based on the map.dat file mapping.
+ *
+ * Databases matching the --exclude-database option are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ /* Save the db name so it can be reused for all the databases. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If exclude-patterns is given, then connect to the database to process
+ * it.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * filter the db list according to the exclude patterns
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the toc.glo file and execute/append all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, 1, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index 37d893d5e6a..083f5c5bf9d
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -244,4 +244,31 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each of these test cases is named, and those names are used for failure
+# reporting and also to save the dump and restore information needed for the
+# test to assert.
+#
+# The "setup_sql" is a valid psql script that contains SQL commands to execute
+# before actually running the tests. All the setups are executed before any
+# test execution.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must have the --file flag to save the restore output so that we
+# can assert on it.
+#
+# The "like" and "unlike" entries are regexps used to match the pg_restore
+# output. At least one of them must be filled in per test case, but a test
+# can have both. See the "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^\n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster for each test case's pg_restore run so
+ # that we don't need to clean up the target cluster after each run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Run pg_dumpall against the source node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with pg_dumpall dumps restored using pg_restore
+# test case 1: -C is not used in pg_restore with a pg_dumpall dump
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: a non-existent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.47.3
On Tue, Dec 9, 2025 at 12:18 AM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
On Mon, 8 Dec 2025 at 22:39, tushar <tushar.ahuja@enterprisedb.com> wrote:
Here, I am attaching an updated patch for the review and testing. This
can be applied on commit d0d0ba6cf66c4043501f6f7.
Thanks, Mahendra. Please refer to this scenario: if the
"--transaction-size" switch is used with pg_dumpall/pg_restore, then the
table creation fails (i.e., the table is not created).
Steps to reproduce:
1. Connect to the psql terminal, create a table/insert rows { create table
t(n int); insert into t values (generate_series(1,15)); }
2. Perform the pg_dumpall operation { ./pg_dumpall -Ft -f tar.dump }
3. new cluster:
try to restore with the --transaction-size switch { ./pg_restore -Ft tar.dump
-C -d postgres --transaction-size=10 } = the table fails to be created
I have checked pg_dump/pg_restore with --transaction-size, and it is
working fine, i.e., the table is created successfully:
./pg_dump -Ft -f tar.d postgres
./pg_restore --transaction-size=10 -Ft -d new_database tar.d
regards,
On Wed, 10 Dec 2025 at 19:08, tushar <tushar.ahuja@enterprisedb.com> wrote:
Thanks Tushar for the report.
If --transaction-size is given a non-zero value, then pg_restore behaves
as if -e/--exit-on-error were specified: if any error occurs during the
restore, it exits without restoring the full cluster.
Here, in our case, because the target cluster already has a role matching
the current user, the restore reports "pg_restore: error: could not
execute query: ERROR: role "role" already exists" and exits after this
error.
If you restore using a different role, then you will not get any error
and the full cluster will be restored. I will add some handling to
ignore the "CREATE ROLE current_user" command in pg_restore.
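The handling described above could be sketched as follows. This is a purely illustrative Python model, not pg_restore's actual C code; the function name and the statement format are assumptions. The idea is to filter the global SQL statements, dropping a CREATE ROLE for the role the restore session is connected as, since that role necessarily exists already:

```python
import re

def skip_current_user_role(statements, current_user):
    """Drop a CREATE ROLE statement for the restoring user (hypothetical
    sketch of the handling described above, not pg_restore's real code)."""
    # Match e.g. 'CREATE ROLE "postgres";' or 'CREATE ROLE postgres;'
    pattern = re.compile(
        r'^\s*CREATE\s+ROLE\s+"?%s"?\s*;' % re.escape(current_user),
        re.IGNORECASE)
    return [s for s in statements if not pattern.match(s)]

stmts = ['CREATE ROLE "postgres";', 'CREATE ROLE grant1;']
print(skip_current_user_role(stmts, 'postgres'))
```

With this filtering, restoring as a role that is already present would no longer abort an --exit-on-error (or --transaction-size) restore.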
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Thu, Dec 11, 2025 at 9:39 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
Thanks Mahendra. Could you please also add some error message for the
pg_restore command below?
postgres=# create table t(n int);
CREATE TABLE
postgres=# insert into t values (1),(10),(100);
INSERT 0 3
Perform pg_dump: ./pg_dump -Ft -f a.a1 postgres
Perform pg_restore: ./pg_restore -Ft a.a1 -f -C -v
pg_restore: creating TABLE "public.t"
pg_restore: processing data for table "public.t"
[edb@1a1c15437e7c bin]$ ./psql postgres
psql (19devel)
Type "help" for help.
postgres=# \dt
Did not find any tables.
postgres=#
regards,
On Fri, 12 Dec 2025 at 19:10, tushar <tushar.ahuja@enterprisedb.com> wrote:
Hi Tushar,
This is standard command-line argument handling. In the code, "-f"
expects a file name, so the "-C" that follows it is consumed as the file
name. The same applies to every option that takes an argument.
Also, if pg_restore is given the "-f" option, then a "-d database" name
can't be given: the output is written to the "-f filename" (it is not
restored into the cluster).
Please let me know if you still have some doubts.
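The argument-consumption behavior described here can be demonstrated with any getopt-style parser. Below is a minimal Python sketch; the option string is a hypothetical subset of pg_restore's real options (which are parsed with C getopt_long), chosen only to mirror the "-Ft a.a1 -f -C -v" invocation above:

```python
import getopt

# Hypothetical subset of pg_restore's short options: -F and -f take an
# argument, -C and -v are flags.
argv = ['-Ft', 'a.a1', '-f', '-C', '-v']
opts, rest = getopt.gnu_getopt(argv, 'F:f:Cv')

# Because -f requires an argument, the '-C' that follows it is consumed
# as the -f file name rather than being treated as the -C flag.
print(opts)
print(rest)
```

The parsed options show ('-f', '-C'): the restore writes its output to a file literally named "-C", and no database restore happens, which matches the behavior Tushar observed.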
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Tue, Dec 9, 2025 at 2:49 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Here, I am attaching an updated patch for the review and testing. This
can be applied on commit d0d0ba6cf66c4043501f6f7.
hi.
attached is the pgindent diff for
v12_09122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch.
Attachments:
v12-0001-run-pgindent-for.no-cfbot (application/octet-stream)
From a421ed09c07d4116a456b56714e80297941eda0a Mon Sep 17 00:00:00 2001
From: jian he <jian.universality@gmail.com>
Date: Thu, 1 Jan 2026 13:25:33 +0800
Subject: [PATCH v12 1/1] run pgindent for
run pgindent for v12_09122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
---
src/bin/pg_dump/pg_backup_archiver.c | 9 ++-
src/bin/pg_dump/pg_dumpall.c | 107 ++++++++++++++-------------
src/bin/pg_dump/pg_restore.c | 68 ++++++++---------
3 files changed, 95 insertions(+), 89 deletions(-)
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 9ef78bf2b1e..c385d42e406 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec, bool append_data);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -772,12 +772,13 @@ RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
/* Skip for CONNECT meta command. */
if (!ropt->filename && te && te->tag &&
- (strcmp(te->tag, "CONNECT") == 0))
+ (strcmp(te->tag, "CONNECT") == 0))
continue;
/* Skip if no-tablespace is given. */
- if (ropt->noTablespace && te && te->tag && ((strcmp(te->tag, "dumpTablespaces") == 0) ||
- (strcmp(te->tag, "dropTablespaces") == 0)))
+ if (ropt->noTablespace && te && te->tag &&
+ ((strcmp(te->tag, "dumpTablespaces") == 0) ||
+ (strcmp(te->tag, "dropTablespaces") == 0)))
continue;
switch (_tocEntryRestorePass(te))
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 8d4aac157ac..791d8bb3228 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -68,7 +68,7 @@ static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
static void dumpDatabases(PGconn *conn, bool output_clean);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -78,7 +78,7 @@ static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
static ArchiveFormat parseDumpFormat(const char *format);
-static int createDumpId(void);
+static int createDumpId(void);
static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
@@ -128,7 +128,7 @@ static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
static Archive *fout = NULL;
static pg_compress_specification compression_spec = {0};
-static int dumpIdVal = 0;
+static int dumpIdVal = 0;
static ArchiveFormat archDumpFormat = archNull;
int
@@ -460,7 +460,7 @@ main(int argc, char *argv[])
(!filename || strcmp(filename, "") == 0))
{
pg_log_error("option %s=d|c|t requires option %s",
- "-F/--format", "-f/--file");
+ "-F/--format", "-f/--file");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
exit_nicely(1);
}
@@ -468,7 +468,7 @@ main(int argc, char *argv[])
/* restrict-key is only supported with --format=plain */
if (archDumpFormat != archNull && restrict_key)
pg_fatal("option %s can only be used with %s=plain",
- "--restrict-key", "--format");
+ "--restrict-key", "--format");
/*
* If password values are not required in the dump, switch to using
@@ -640,19 +640,19 @@ main(int argc, char *argv[])
/* create a archive file for global commands. */
if (filename && archDumpFormat != archNull)
{
- char global_path[MAXPGPATH];
+ char global_path[MAXPGPATH];
/* Set file path for global sql commands. */
snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
/* Open the output file */
fout = CreateArchive(global_path, archCustom, compression_spec,
- dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
/* Make dump options accessible right away */
SetArchiveOptions(fout, &dopt, NULL);
- ((ArchiveHandle*)fout)->connection = conn;
- ((ArchiveHandle*)fout)->public.numWorkers = 1;
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
/* Register the cleanup hook */
on_exit_close_archive(fout);
@@ -661,8 +661,9 @@ main(int argc, char *argv[])
fout->verbose = verbose;
/*
- * We allow the server to be back to 9.2, and up to any minor release of
- * our own major version. (See also version check in pg_dumpall.c.)
+ * We allow the server to be back to 9.2, and up to any minor release
+ * of our own major version. (See also version check in
+ * pg_dumpall.c.)
*/
fout->minRemoteVersion = 90200;
fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
@@ -694,14 +695,17 @@ main(int argc, char *argv[])
destroyPQExpBuffer(qry);
}
- /* dumpStdStrings: put the correct escape string behavior into the archive */
+ /*
+ * dumpStdStrings: put the correct escape string behavior into the
+ * archive
+ */
{
const char *stdstrings = std_strings ? "on" : "off";
PQExpBuffer qry = createPQExpBuffer();
pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
- stdstrings);
+ stdstrings);
createOneArchiveEntry(qry->data, "STDSTRINGS");
destroyPQExpBuffer(qry);
}
@@ -711,20 +715,20 @@ main(int argc, char *argv[])
fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
/*
- * Enter restricted mode to block any unexpected psql meta-commands. A
+ * Enter restricted mode to block any unexpected psql meta-commands. A
* malicious source might try to inject a variety of things via bogus
* responses to queries. While we cannot prevent such sources from
* affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
+ * meta-commands so that the client machine that runs psql with the
+ * dump output remains unaffected.
*/
fprintf(OPF, "\\restrict %s\n\n", restrict_key);
/*
* We used to emit \connect postgres here, but that served no purpose
* other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
+ * database. Everything we're restoring here is a global, so
+ * whichever database we're connected to at the moment is fine.
*/
/* Restore will need to write to the target cluster */
@@ -784,8 +788,8 @@ main(int argc, char *argv[])
if (archDumpFormat == archNull)
{
/*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
+ * Exit restricted mode just before dumping the databases. pg_dump
+ * will handle entering restricted mode again as appropriate.
*/
fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
}
@@ -945,8 +949,8 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(rolename));
+ if_exists ? "IF EXISTS " : "",
+ fmtId(rolename));
if (archDumpFormat == archNull)
fprintf(OPF, "%s", delQry->data);
@@ -1534,8 +1538,8 @@ dropTablespaces(PGconn *conn)
char *spcname = PQgetvalue(res, i, 0);
appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(spcname));
+ if_exists ? "IF EXISTS " : "",
+ fmtId(spcname));
if (archDumpFormat == archNull)
fprintf(OPF, "%s", delQry->data);
@@ -1706,8 +1710,8 @@ dropDBs(PGconn *conn)
PQExpBuffer delQry = createPQExpBuffer();
appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(dbname));
+ if_exists ? "IF EXISTS " : "",
+ fmtId(dbname));
if (archDumpFormat == archNull)
fprintf(OPF, "%s", delQry->data);
@@ -1751,7 +1755,7 @@ dumpUserConfig(PGconn *conn, const char *username)
fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
else
{
- PQExpBuffer qry = createPQExpBuffer();
+ PQExpBuffer qry = createPQExpBuffer();
appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
createOneArchiveEntry(qry->data, "COMMENT");
@@ -1843,7 +1847,7 @@ dumpDatabases(PGconn *conn, bool output_clean)
int i;
char db_subdir[MAXPGPATH];
char dbfilepath[MAXPGPATH];
- FILE *map_file = NULL;
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1876,29 +1880,29 @@ dumpDatabases(PGconn *conn, bool output_clean)
* under the main directory and each database dump file or subdirectory
* will be created in that subdirectory by pg_dump.
*/
- if (archDumpFormat != archNull)
- {
- char map_file_path[MAXPGPATH];
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
- snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
- /* Create a subdirectory with 'databases' name under main directory. */
- if (mkdir(db_subdir, pg_dir_create_mode) != 0)
- pg_fatal("could not create directory \"%s\": %m", db_subdir);
+ /* Create a subdirectory with 'databases' name under main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
- snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
- /* Create a map file (to store dboid and dbname) */
- map_file = fopen(map_file_path, PG_BINARY_W);
- if (!map_file)
- pg_fatal("could not open file \"%s\": %m", map_file_path);
- }
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- char *oid = PQgetvalue(res, i, 1);
+ char *oid = PQgetvalue(res, i, 1);
const char *create_opts = "";
int ret;
@@ -1947,7 +1951,7 @@ dumpDatabases(PGconn *conn, bool output_clean)
fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- PQExpBuffer qry = createPQExpBuffer();
+ PQExpBuffer qry = createPQExpBuffer();
appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
createOneArchiveEntry(qry->data, "CONNECT");
@@ -2019,7 +2023,7 @@ runPgDump(const char *dbname, const char *create_opts, char *dbfile)
if (archDumpFormat != archNull)
{
printfPQExpBuffer(&cmd, "\"%s\" %s -f %s %s", pg_dump_bin,
- pgdumpopts->data, dbfile, create_opts);
+ pgdumpopts->data, dbfile, create_opts);
if (archDumpFormat == archDirectory)
appendPQExpBufferStr(&cmd, " --format=directory ");
@@ -2239,14 +2243,15 @@ createDumpId(void)
static void
createOneArchiveEntry(const char *query, const char *tag)
{
- CatalogId nilCatalogId = {0, 0};
+ CatalogId nilCatalogId = {0, 0};
+
Assert(fout != NULL);
ArchiveEntry(fout,
- nilCatalogId, /* catalog ID */
- createDumpId(), /* dump ID */
- ARCHIVE_OPTS(.tag = tag,
- .description = tag,
- .section = SECTION_PRE_DATA,
- .createStmt = query));
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
}
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index f59813965bc..697f2f89d3e 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -62,11 +62,11 @@ static bool file_exists_in_directory(const char *dir, const char *filename);
static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
int numWorkers, bool append_data, int num,
bool globals_only);
-static int restore_global_objects(const char *inputFileSpec,
- RestoreOptions *opts, int numWorkers,
- int num, bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers,
+ int num, bool globals_only);
static int restore_all_databases(const char *inputFileSpec,
- SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static int get_dbnames_list_to_restore(PGconn *conn,
SimplePtrList *dbname_oid_list,
SimpleStringList db_exclude_patterns);
@@ -82,7 +82,7 @@ typedef struct DbOidName
{
Oid oid;
char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
-} DbOidName;
+} DbOidName;
int
@@ -389,7 +389,7 @@ main(int argc, char **argv)
if (db_exclude_patterns.head != NULL && globals_only)
{
pg_log_error("option %s cannot be used together with %s",
- "--exclude-database", "-g/--globals-only");
+ "--exclude-database", "-g/--globals-only");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
exit_nicely(1);
}
@@ -469,7 +469,7 @@ main(int argc, char **argv)
if (data_only && globals_only)
pg_fatal("options %s and %s cannot be used together",
- "-a/--data-only", "-g/--globals-only");
+ "-a/--data-only", "-g/--globals-only");
/*
* -C is not compatible with -1, because we can't create a database inside
@@ -537,12 +537,11 @@ main(int argc, char **argv)
}
/*
- * If toc.glo file is present, then restore all the
- * databases from map.dat, but skip restoring those matching
- * --exclude-database patterns.
+ * If toc.glo file is present, then restore all the databases from
+ * map.dat, but skip restoring those matching --exclude-database patterns.
*/
if (inputFileSpec != NULL &&
- (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
{
/*
* Can only use --list or --use-list options with a single database
@@ -550,10 +549,10 @@ main(int argc, char **argv)
*/
if (opts->tocSummary)
pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
- "-l/--list");
+ "-l/--list");
else if (opts->tocFile)
pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
- "-L/--use-list");
+ "-L/--use-list");
/*
* To restore from a pg_dumpall archive, -C (create database) option
@@ -562,7 +561,7 @@ main(int argc, char **argv)
if (!globals_only && opts->createDB != 1)
{
pg_log_error("option %s must be specified when restoring an archive created by pg_dumpall",
- "-C/--create");
+ "-C/--create");
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
pg_log_error_hint("Individual databases can be restored using their specific archives.");
exit_nicely(1);
@@ -571,14 +570,14 @@ main(int argc, char **argv)
/* If globals-only, then return from here. */
if (globals_only)
{
- char global_path[MAXPGPATH];
+ char global_path[MAXPGPATH];
/* Set path for toc.glo file. */
snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
n_errors = restore_global_objects(global_path, opts, numWorkers, 0, globals_only);
pg_log_info("database restoring skipped because option %s was specified",
- "-g/--globals-only");
+ "-g/--globals-only");
}
else
{
@@ -596,16 +595,16 @@ main(int argc, char **argv)
{
simple_string_list_destroy(&db_exclude_patterns);
pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
- "--exclude-database");
+ "--exclude-database");
}
if (globals_only)
pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
- "-g/--globals-only");
+ "-g/--globals-only");
/* Process if toc.glo file does not exist. */
n_errors = restore_one_database(inputFileSpec, opts,
- numWorkers, false, 0, globals_only);
+ numWorkers, false, 0, globals_only);
}
/* Done, print a summary of ignored errors during restore. */
@@ -625,18 +624,19 @@ main(int argc, char **argv)
*
* If globals_only is set, then skip DROP DATABASE commands from restore.
*/
-static int restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
- int numWorkers, int num, bool globals_only)
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, int num, bool globals_only)
{
- int nerror = 0;
- int format = opts->format;
+ int nerror = 0;
+ int format = opts->format;
/* Set format as custom so that toc.glo file can be read. */
opts->format = archCustom;
if (!data_only)
nerror = restore_one_database(inputFileSpec, opts, numWorkers,
- false, num, globals_only);
+ false, num, globals_only);
/* Reset format value. */
opts->format = format;
@@ -1020,8 +1020,8 @@ get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oi
/*
- * If there is no map.dat file in dump, then return from here as
- * there is no database to restore.
+ * If there is no map.dat file in dump, then return from here as there is
+ * no database to restore.
*/
if (!file_exists_in_directory(dumpdirpath, "map.dat"))
{
@@ -1107,7 +1107,7 @@ restore_all_databases(const char *inputFileSpec,
bool dumpData = opts->dumpData;
bool dumpSchema = opts->dumpSchema;
bool dumpStatistics = opts->dumpSchema;
- PGconn *conn = NULL;
+ PGconn *conn = NULL;
char global_path[MAXPGPATH];
/* Set path for toc.glo file. */
@@ -1137,8 +1137,8 @@ restore_all_databases(const char *inputFileSpec,
if (opts->cparams.dbname)
{
conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
if (!conn)
pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
@@ -1149,8 +1149,8 @@ restore_all_databases(const char *inputFileSpec,
pg_log_info("trying to connect to database \"%s\"", "postgres");
conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
/* Try with template1. */
if (!conn)
@@ -1158,8 +1158,8 @@ restore_all_databases(const char *inputFileSpec,
pg_log_info("trying to connect to database \"%s\"", "template1");
conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
- opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
- false, progname, NULL, NULL, NULL, NULL);
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
}
}
}
@@ -1175,7 +1175,7 @@ restore_all_databases(const char *inputFileSpec,
PQfinish(conn);
/* Open toc.dat file and execute/append all the global sql commands. */
- n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, 0, false);
/* Exit if no db needs to be restored. */
if (dbname_oid_list.head == NULL || num_db_restore == 0)
--
2.34.1
On 2026-01-01 Th 12:29 AM, jian he wrote:
On Tue, Dec 9, 2025 at 2:49 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Here, I am attaching an updated patch for the review and testing. This
can be applied on commit d0d0ba6cf66c4043501f6f7.

hi.
attached is the pgindent diff for
v12_09122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch.
Thanks
I would normally expect to do this prior to committing (and my git hook
reminds me if I forget).
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Fri, Dec 12, 2025 at 9:47 PM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
[edb@1a1c15437e7c bin]$ ./psql postgres
psql (19devel)
Type "help" for help.
postgres=# \dt
Did not find any tables.
postgres=#

regards,
Hi Tushar,
This is the handling of command line arguments.
In code, after "-f", we expect file name, but here you are using "-C"
which will be considered as file name. This is the case for all the
command line arguments.

If pg_restore has the "-f" option, then the "-d database" name can't
be given and data will be copied into "-f filename" (it will not be
restored in the cluster).

Please let me know if you still have some doubts.
Thanks Mahendra , that was very helpful.
Please refer this scenario where i am getting error like:
"psql:output_script4.sql:95: error: backslash commands are restricted; only
\unrestrict is allowed"
if i run the .sql file generated by pg_restore command
Steps to reproduce:
./pg_dumpall -Ft -f dump.tar
./pg_restore -Ft dump.tar -C -v -f output_script.sql
run this .sql file against a new cluster ( \i output_script.sql )
restore will be done successfully but there are a few errors like this
psql:output_script4.sql:95: error: backslash commands are restricted; only
\unrestrict is allowed
Is this expected?
regards,
On Tue, Dec 9, 2025 at 2:49 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Here, I am attaching an updated patch for the review and testing. This
can be applied on commit d0d0ba6cf66c4043501f6f7.
hi.
more comments about
v12_09122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch
+ In all other modes, <application>pg_dumpall</application>
first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and
<filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and
tablespaces. The second
+ contains a mapping between database oids and names. These
files are used by
+ <application>pg_restore</application>. Data for individual
databases is placed in
+ <filename>databases</filename> subdirectory, named using the
database's <type>oid</type>.
I tried all these 3 formats, there is no "toc.dmp/toc.tar".
Am I missing something?
-
+ If format is given, then dump will be based on format, default plain.
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=d/a/c/p -f db.out</userinput>
</screen>
The text in the <screen> section should work correctly when pasted directly into
the terminal.
but ``pg_dumpall --format=d/a/c/p -f db.out``
will error out:
``pg_dumpall: error: unrecognized output format "d/a/c/p"; please
specify "c", "d", "p", or "t"``
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
PGresult *res;
@@ -287,7 +287,8 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
It would be nice to add some comments explaining why we don't call
PQfinish for archive format.
+/*
+ * createOneArchiveEntry
+ *
+ * This creates one archive entry based on format.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ CatalogId nilCatalogId = {0, 0};
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
this is only used when archDumpFormat is not archNull.
comments can change to
"This creates one archive entry for non-text archive"
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, int num, bool globals_only)
I guess "num" means number of databases, but the name is
"restore_one_database", which seems confusing. Similarly, I am confused by
the restore_global_objects parameter "num".
+ pg_log_error("option %s must be specified when restoring an archive
created by pg_dumpall",
+ "-C/--create");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their
specific archives.");
Here we report that --create must be specified.
The second pg_log_error_hint() message about restoring individual databases
seems unrelated to this requirement, and seems confusing in this context.
get_dbnames_list_to_restore
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("considering PATTERN as NAME for --exclude-database
option as no database connection while doing pg_restore");
is unreachable, because conn is always non-NULL:
restore_all_databases falls back to "template1", and the template
database "template1" is undroppable; see ``dbcommands.c:1734``.
get_dbname_oid_list_from_mfile does not handle database names that contain
newline characters correctly.
For example:
CREATE DATABASE "test
\r";
I am unable to dump and restore a database with such a name.
On 2026-01-02 Fr 2:57 AM, tushar wrote:

On Fri, Dec 12, 2025 at 9:47 PM Mahendra Singh Thalor
<mahi6run@gmail.com> wrote:

[edb@1a1c15437e7c bin]$ ./psql postgres
psql (19devel)
Type "help" for help.
postgres=# \dt
Did not find any tables.
postgres=#

regards,

Hi Tushar,
This is the handling of command line arguments.
In code, after "-f", we expect file name, but here you are using "-C"
which will be considered as file name. This is the case for all the
command line arguments.

If pg_restore has the "-f" option, then the "-d database" name can't
be given and data will be copied into "-f filename" (it will not be
restored in the cluster).

Please let me know if you still have some doubts.

Thanks Mahendra, that was very helpful.
Please refer this scenario where i am getting error like:
"psql:output_script4.sql:95: error: backslash commands are restricted;
only \unrestrict is allowed"
if i run the .sql file generated by pg_restore command

Steps to reproduce:
./pg_dumpall -Ft -f dump.tar
./pg_restore -Ft dump.tar -C -v -f output_script.sql
run this .sql file against a new cluster ( \i output_script.sql )
restore will be done successfully but there are a few errors like this
psql:output_script4.sql:95: error: backslash commands are restricted;
only \unrestrict is allowed
Is this expected?

It's probably harmless, we connect to the databases further down to do
actual work. But it's also not nice. The toc.glo seems to have a bunch
of extraneous entries of type COMMENT and CONNECT. Why is that? As far
as possible this should have output pretty much identical to a plain
pg_dumpall.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Thanks Jian and Andrew.
I will fix these comments and will post an updated patch in the coming days.
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
On Sat, 3 Jan 2026 at 11:01 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2026-01-03 Sa 1:29 PM, Mahendra Singh Thalor wrote:
Thanks Jian and Andrew.
I will fix these comments and i will post an updated patch in coming
days.
You might find useful a tool I developed years ago that will output
everything that's in a TOC file. I've just updated it slightly. See
https://github.com/adunstan/DumpToc (Yaml output works best)
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Thanks Jian for the review and the delta patch. I merged it.
On Fri, 2 Jan 2026 at 13:35, jian he <jian.universality@gmail.com> wrote:
On Tue, Dec 9, 2025 at 2:49 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
Here, I am attaching an updated patch for the review and testing. This
can be applied on commit d0d0ba6cf66c4043501f6f7.

hi.
more comments about
v12_09122025-Non-text-modes-for-pg_dumpall-correspondingly-change.patch

+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.dat/toc.dmp/toc.tar</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.

I tried all these 3 formats, there is no "toc.dmp/toc.tar".
Am I missing something?
Fixed. Now we have a glo.dat file in custom format.
-
+ If format is given, then dump will be based on format, default plain.
 <screen>
 <prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=d/a/c/p -f db.out</userinput>
 </screen>

The text in the <screen> section should work correctly when pasted directly into
the terminal.
but ``pg_dumpall --format=d/a/c/p -f db.out``
will error out:
``pg_dumpall: error: unrecognized output format "d/a/c/p"; please
specify "c", "d", "p", or "t"``
Fixed. Added 4 examples.
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
    PGresult   *res;

@@ -287,7 +287,8 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
It would be nice to add some comments explaining why we don't call
PQfinish for archive format.
Fixed.
+/*
+ * createOneArchiveEntry
+ *
+ * This creates one archive entry based on format.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+    CatalogId    nilCatalogId = {0, 0};
+
+    Assert(fout != NULL);
+
+    ArchiveEntry(fout,
+                 nilCatalogId,      /* catalog ID */
+                 createDumpId(),    /* dump ID */
+                 ARCHIVE_OPTS(.tag = tag,
+                              .description = tag,
+                              .section = SECTION_PRE_DATA,
+                              .createStmt = query));
+}

this is only used when archDumpFormat is not archNull.
comments can change to
"This creates one archive entry for non-text archive"
Fixed.
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+                     int numWorkers, bool append_data, int num, bool globals_only)

I guess "num" means number of databases, but the name is
"restore_one_database", which seems confusing. Similarly, I am confused by
the restore_global_objects parameter "num".
Fixed. I removed num.
+        pg_log_error("option %s must be specified when restoring an archive created by pg_dumpall",
+                     "-C/--create");
+        pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+        pg_log_error_hint("Individual databases can be restored using their specific archives.");

Here we report that --create must be specified.
The second pg_log_error_hint() message about restoring individual databases
seems unrelated to this requirement, and seems confusing in this context.
We are giving some extra info if the user wants to restore one
database from the dump of pg_dumpall.
get_dbnames_list_to_restore

+    if (!conn && db_exclude_patterns.head != NULL)
+        pg_log_info("considering PATTERN as NAME for --exclude-database option as no database connection while doing pg_restore");

is unreachable, because conn is always non-NULL:
restore_all_databases falls back to "template1", and the template
database "template1" is undroppable; see ``dbcommands.c:1734``.
Yes, this is unreachable, but we are keeping it.
get_dbname_oid_list_from_mfile does not handle database names that contain
newline characters correctly.
For example:
CREATE DATABASE "test
\r";

I am unable to dump and restore a database with such a name.
There is another thread for this, with patches. Last year, we
planned to block such database names at creation time.
It's probably harmless, we connect to the databases further down to do
actual work. But it's also not nice. The toc.glo seems to have a bunch of
extraneous entries of type COMMENT and CONNECT. Why is that? As far as
possible this should have output pretty much identical to a plain pg_dumpall.
cheers
andrew
If we don't dump those comments in non-text format, then the output of
"pg_restore -f filename dump_non_text" will not be the same as the
plain dump of pg_dumpall.
Here, I am attaching an updated patch for the review and testing.
Note: some of the review comments are still not fixed. I am working on
those and will post an updated patch.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments:
v13_06012026-Non-text-modes-for-pg_dumpall-correspondingly-change.patch (application/octet-stream)
From a24680592add01463cf8d672d6af50520e6c75b7 Mon Sep 17 00:00:00 2001
From: Mahendra Singh Thalor <mahi6run@gmail.com>
Date: Tue, 6 Jan 2026 11:36:32 +0530
Subject: [PATCH] Non text modes for pg_dumpall, correspondingly change
pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, toc.glo and map.dat. The
first contains commands restoring the global data in custom format, and the second
contains a map from oids to database names in text format. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing map.dat and toc.glo,
it restores the global settings from toc.glo if it exists, and then
restores each database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
v13
---
doc/src/sgml/ref/pg_dumpall.sgml | 107 ++++-
doc/src/sgml/ref/pg_restore.sgml | 66 ++-
src/bin/pg_dump/connectdb.c | 13 +-
src/bin/pg_dump/connectdb.h | 2 +-
src/bin/pg_dump/meson.build | 1 +
src/bin/pg_dump/parallel.c | 10 +
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 35 +-
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_tar.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
src/bin/pg_dump/pg_dumpall.c | 635 +++++++++++++++++++++------
src/bin/pg_dump/pg_restore.c | 617 +++++++++++++++++++++++++-
src/bin/pg_dump/t/001_basic.pl | 27 ++
src/bin/pg_dump/t/007_pg_dumpall.pl | 396 +++++++++++++++++
15 files changed, 1743 insertions(+), 173 deletions(-)
mode change 100644 => 100755 src/bin/pg_dump/t/001_basic.pl
create mode 100755 src/bin/pg_dump/t/007_pg_dumpall.pl
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index 8834b7ec141..51ec4f730e0 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -16,7 +16,10 @@ PostgreSQL documentation
<refnamediv>
<refname>pg_dumpall</refname>
- <refpurpose>extract a <productname>PostgreSQL</productname> database cluster into a script file</refpurpose>
+
+ <refpurpose>
+ export a <productname>PostgreSQL</productname> database cluster as an SQL script or to other formats
+ </refpurpose>
</refnamediv>
<refsynopsisdiv>
@@ -33,7 +36,7 @@ PostgreSQL documentation
<para>
<application>pg_dumpall</application> is a utility for writing out
(<quote>dumping</quote>) all <productname>PostgreSQL</productname> databases
- of a cluster into one script file. The script file contains
+ of a cluster into an SQL script file or an archive. The output contains
<acronym>SQL</acronym> commands that can be used as input to <xref
linkend="app-psql"/> to restore the databases. It does this by
calling <xref linkend="app-pgdump"/> for each database in the cluster.
@@ -52,11 +55,16 @@ PostgreSQL documentation
</para>
<para>
- The SQL script will be written to the standard output. Use the
+ Plain text SQL scripts will be written to the standard output. Use the
<option>-f</option>/<option>--file</option> option or shell operators to
redirect it into a file.
</para>
+ <para>
+ Archives in other formats will be placed in a directory named using the
+ <option>-f</option>/<option>--file</option>, which is required in this case.
+ </para>
+
<para>
<application>pg_dumpall</application> needs to connect several
times to the <productname>PostgreSQL</productname> server (once per
@@ -131,10 +139,85 @@ PostgreSQL documentation
<para>
Send output to the specified file. If this is omitted, the
standard output is used.
+       Note: This option can only be omitted when <option>--format</option> is plain.
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-F <replaceable class="parameter">format</replaceable></option></term>
+ <term><option>--format=<replaceable class="parameter">format</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the format of dump files. In plain format, all the dump data is
+ sent in a single text stream. This is the default.
+
+ In all other modes, <application>pg_dumpall</application> first creates two files:
+ <filename>toc.glo</filename> and <filename>map.dat</filename>, in the directory
+ specified by <option>--file</option>.
+ The first file contains global data, such as roles and tablespaces, in custom format. The second
+ contains a mapping between database oids and names. These files are used by
+ <application>pg_restore</application>. Data for individual databases is placed in the
+ <filename>databases</filename> subdirectory, named using the database's <type>oid</type>.
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>d</literal></term>
+ <term><literal>directory</literal></term>
+ <listitem>
+ <para>
+ Output directory-format archives for each database,
+ suitable for input into pg_restore. The directory
+ will have database <type>oid</type> as its name.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>p</literal></term>
+ <term><literal>plain</literal></term>
+ <listitem>
+ <para>
+ Output a plain-text SQL script file (the default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>c</literal></term>
+ <term><literal>custom</literal></term>
+ <listitem>
+ <para>
+ Output a custom-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.dmp</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>t</literal></term>
+ <term><literal>tar</literal></term>
+ <listitem>
+ <para>
+ Output a tar-format archive for each database,
+ suitable for input into pg_restore. The archive
+ will be named <filename>dboid.tar</filename> where <type>dboid</type> is the
+ <type>oid</type> of the database.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ Note: see <xref linkend="app-pgdump"/> for details
+ of how the various non-plain-text archives work.
+
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-g</option></term>
<term><option>--globals-only</option></term>
@@ -937,9 +1020,16 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
<title>Examples</title>
<para>
To dump all databases:
-
+ If a format is given, the dump is produced in that format; the default is plain.
<screen>
<prompt>$</prompt> <userinput>pg_dumpall > db.out</userinput>
+</screen>
+
+<screen>
+<prompt>$</prompt> <userinput>pg_dumpall --format=directory -f db.out</userinput>
+<prompt>$</prompt> <userinput>pg_dumpall --format=custom -f db.out</userinput>
+<prompt>$</prompt> <userinput>pg_dumpall --format=tar -f db.out</userinput>
+<prompt>$</prompt> <userinput>pg_dumpall --format=plain -f db.out</userinput>
</screen>
</para>
@@ -956,6 +1046,15 @@ exclude database <replaceable class="parameter">PATTERN</replaceable>
the script will attempt to drop other databases immediately, and that
will fail for the database you are connected to.
</para>
+
+ <para>
+ If the dump was taken in a non-text format, use pg_restore to restore all databases.
+<screen>
+<prompt>$</prompt> <userinput>pg_restore db.out -d postgres -C</userinput>
+</screen>
+ This will restore all the databases. If the user does not want to restore some
+ databases, --exclude-database can be used to skip them.
+</para>
</refsect1>
<refsect1>
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index a468a38361a..7497b527ae6 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -18,8 +18,9 @@ PostgreSQL documentation
<refname>pg_restore</refname>
<refpurpose>
- restore a <productname>PostgreSQL</productname> database from an
- archive file created by <application>pg_dump</application>
+ restore <productname>PostgreSQL</productname> databases from archives
+ created by <application>pg_dump</application> or
+ <application>pg_dumpall</application>
</refpurpose>
</refnamediv>
@@ -38,13 +39,14 @@ PostgreSQL documentation
<para>
<application>pg_restore</application> is a utility for restoring a
- <productname>PostgreSQL</productname> database from an archive
- created by <xref linkend="app-pgdump"/> in one of the non-plain-text
+ <productname>PostgreSQL</productname> database or cluster from an archive
+ created by <xref linkend="app-pgdump"/> or
+ <xref linkend="app-pg-dumpall"/> in one of the non-plain-text
formats. It will issue the commands necessary to reconstruct the
- database to the state it was in at the time it was saved. The
- archive files also allow <application>pg_restore</application> to
+ database or cluster to the state it was in at the time it was saved. The
+ archives also allow <application>pg_restore</application> to
be selective about what is restored, or even to reorder the items
- prior to being restored. The archive files are designed to be
+ prior to being restored. The archive formats are designed to be
portable across architectures.
</para>
@@ -52,10 +54,17 @@ PostgreSQL documentation
<application>pg_restore</application> can operate in two modes.
If a database name is specified, <application>pg_restore</application>
connects to that database and restores archive contents directly into
- the database. Otherwise, a script containing the SQL
- commands necessary to rebuild the database is created and written
+ the database.
+ When restoring from a dump made by <application>pg_dumpall</application>,
+ each database will be created and then the restoration will be run in that
+ database.
+
+ Otherwise, when a database name is not specified, a script containing the SQL
+ commands necessary to rebuild the database or cluster is created and written
to a file or standard output. This script output is equivalent to
- the plain text output format of <application>pg_dump</application>.
+ the plain text output format of <application>pg_dump</application> or
+ <application>pg_dumpall</application>.
+
Some of the options controlling the output are therefore analogous to
<application>pg_dump</application> options.
</para>
@@ -152,6 +161,8 @@ PostgreSQL documentation
commands that mention this database.
Access privileges for the database itself are also restored,
unless <option>--no-acl</option> is specified.
+ <option>--create</option> is required when restoring multiple databases
+ from an archive created by <application>pg_dumpall</application>.
</para>
<para>
@@ -247,6 +258,19 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-g</option></term>
+ <term><option>--globals-only</option></term>
+ <listitem>
+ <para>
+ Restore only global objects (roles and tablespaces), no databases.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-I <replaceable class="parameter">index</replaceable></option></term>
<term><option>--index=<replaceable class="parameter">index</replaceable></option></term>
@@ -591,6 +615,28 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>
+ <listitem>
+ <para>
+ Do not restore databases whose name matches
+ <replaceable class="parameter">pattern</replaceable>.
+ Multiple patterns can be excluded by writing multiple
+ <option>--exclude-database</option> switches. The
+ <replaceable class="parameter">pattern</replaceable> parameter is
+ interpreted as a pattern according to the same rules used by
+ <application>psql</application>'s <literal>\d</literal>
+ commands (see <xref linkend="app-psql-patterns"/>),
+ so multiple databases can also be excluded by writing wildcard
+ characters in the pattern. When using wildcards, be careful to
+ quote the pattern if needed to prevent shell wildcard expansion.
+ </para>
+ <para>
+ This option is only relevant when restoring from an archive made using <application>pg_dumpall</application>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>--filter=<replaceable class="parameter">filename</replaceable></option></term>
<listitem>
diff --git a/src/bin/pg_dump/connectdb.c b/src/bin/pg_dump/connectdb.c
index 388d29d0aeb..b12a70ff60b 100644
--- a/src/bin/pg_dump/connectdb.c
+++ b/src/bin/pg_dump/connectdb.c
@@ -225,7 +225,7 @@ ConnectDatabase(const char *dbname, const char *connection_string,
exit_nicely(1);
}
- PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL));
+ PQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL, false));
return conn;
}
@@ -275,7 +275,7 @@ constructConnStr(const char **keywords, const char **values)
* Run a query, return the results, exit program on failure.
*/
PGresult *
-executeQuery(PGconn *conn, const char *query)
+executeQuery(PGconn *conn, const char *query, bool is_archive)
{
PGresult *res;
@@ -287,7 +287,14 @@ executeQuery(PGconn *conn, const char *query)
{
pg_log_error("query failed: %s", PQerrorMessage(conn));
pg_log_error_detail("Query was: %s", query);
- PQfinish(conn);
+
+ /*
+ * When is_archive is set, the connection is registered with the on_exit
+ * hook, so exit_nicely will close it on exit.  Closing it here as well
+ * would close the connection twice and crash.
+ */
+ if (!is_archive)
+ PQfinish(conn);
exit_nicely(1);
}
diff --git a/src/bin/pg_dump/connectdb.h b/src/bin/pg_dump/connectdb.h
index 67813853e65..9d27b931692 100644
--- a/src/bin/pg_dump/connectdb.h
+++ b/src/bin/pg_dump/connectdb.h
@@ -22,5 +22,5 @@ extern PGconn *ConnectDatabase(const char *dbname, const char *connection_string
trivalue prompt_password, bool fail_on_error,
const char *progname, const char **connstr, int *server_version,
char *password, char *override_dbname);
-extern PGresult *executeQuery(PGconn *conn, const char *query);
+extern PGresult *executeQuery(PGconn *conn, const char *query, bool is_archive);
#endif /* CONNECTDB_H */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index 79bd5036841..7c9a475963b 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -103,6 +103,7 @@ tests += {
't/004_pg_dump_parallel.pl',
't/005_pg_dump_filterfile.pl',
't/006_pg_dump_compress.pl',
+ 't/007_pg_dumpall.pl',
't/010_dump_connstr.pl',
],
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index ddaf08faa30..22f57360444 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -333,6 +333,16 @@ on_exit_close_archive(Archive *AHX)
on_exit_nicely(archive_close_connection, &shutdown_info);
}
+/*
+ * When pg_restore restores multiple databases, update the entry already
+ * registered for cleanup so it points at the current archive.
+ */
+void
+replace_on_exit_close_archive(Archive *AHX)
+{
+ shutdown_info.AHX = AHX;
+}
+
/*
* on_exit_nicely handler for shutting down database connections and
* worker processes cleanly.
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index d9041dad720..f631d945472 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -312,7 +312,7 @@ extern void SetArchiveOptions(Archive *AH, DumpOptions *dopt, RestoreOptions *ro
extern void ProcessArchiveRestoreOptions(Archive *AHX);
-extern void RestoreArchive(Archive *AHX);
+extern void RestoreArchive(Archive *AHX, bool append_data, bool globals_only);
/* Open an existing archive */
extern Archive *OpenArchive(const char *FileSpec, const ArchiveFormat fmt);
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 4a63f7392ae..c385d42e406 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -86,7 +86,7 @@ static int RestoringToDB(ArchiveHandle *AH);
static void dump_lo_buf(ArchiveHandle *AH);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static void SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec);
+ const pg_compress_specification compression_spec, bool append_data);
static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
@@ -339,9 +339,14 @@ ProcessArchiveRestoreOptions(Archive *AHX)
StrictNamesCheck(ropt);
}
-/* Public */
+/*
+ * RestoreArchive
+ *
+ * If append_data is set, append to the output file, as we are restoring a
+ * dump of multiple databases taken by pg_dumpall.
+ */
void
-RestoreArchive(Archive *AHX)
+RestoreArchive(Archive *AHX, bool append_data, bool globals_only)
{
ArchiveHandle *AH = (ArchiveHandle *) AHX;
RestoreOptions *ropt = AH->public.ropt;
@@ -458,7 +463,7 @@ RestoreArchive(Archive *AHX)
*/
sav = SaveOutput(AH);
if (ropt->filename || ropt->compression_spec.algorithm != PG_COMPRESSION_NONE)
- SetOutput(AH, ropt->filename, ropt->compression_spec);
+ SetOutput(AH, ropt->filename, ropt->compression_spec, append_data);
ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");
@@ -761,6 +766,21 @@ RestoreArchive(Archive *AHX)
if ((te->reqs & (REQ_SCHEMA | REQ_DATA | REQ_STATS)) == 0)
continue; /* ignore if not to be dumped at all */
+ /* Skip DROP DATABASE if globals_only. */
+ if (globals_only && te && te->tag && (strcmp(te->tag, "DROP_DATABASE") == 0))
+ continue;
+
+ /* Skip CONNECT entries when no output file was specified. */
+ if (!ropt->filename && te && te->tag &&
+ (strcmp(te->tag, "CONNECT") == 0))
+ continue;
+
+ /* Skip tablespace entries if --no-tablespaces was given. */
+ if (ropt->noTablespace && te && te->tag &&
+ ((strcmp(te->tag, "dumpTablespaces") == 0) ||
+ (strcmp(te->tag, "dropTablespaces") == 0)))
+ continue;
+
switch (_tocEntryRestorePass(te))
{
case RESTORE_PASS_MAIN:
@@ -1316,7 +1336,7 @@ PrintTOCSummary(Archive *AHX)
sav = SaveOutput(AH);
if (ropt->filename)
- SetOutput(AH, ropt->filename, out_compression_spec);
+ SetOutput(AH, ropt->filename, out_compression_spec, false);
if (strftime(stamp_str, sizeof(stamp_str), PGDUMP_STRFTIME_FMT,
localtime(&AH->createDate)) == 0)
@@ -1695,7 +1715,8 @@ archprintf(Archive *AH, const char *fmt,...)
static void
SetOutput(ArchiveHandle *AH, const char *filename,
- const pg_compress_specification compression_spec)
+ const pg_compress_specification compression_spec,
+ bool append_data)
{
CompressFileHandle *CFH;
const char *mode;
@@ -1715,7 +1736,7 @@ SetOutput(ArchiveHandle *AH, const char *filename,
else
fn = fileno(stdout);
- if (AH->mode == archModeAppend)
+ if (append_data || AH->mode == archModeAppend)
mode = PG_BINARY_A;
else
mode = PG_BINARY_W;
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 325b53fc9bd..365073b3eae 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -394,6 +394,7 @@ struct _tocEntry
extern int parallel_restore(ArchiveHandle *AH, TocEntry *te);
extern void on_exit_close_archive(Archive *AHX);
+extern void replace_on_exit_close_archive(Archive *AHX);
extern void warn_or_exit_horribly(ArchiveHandle *AH, const char *fmt,...) pg_attribute_printf(2, 3);
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index b5ba3b46dd9..818b80a9369 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -826,7 +826,7 @@ _CloseArchive(ArchiveHandle *AH)
savVerbose = AH->public.verbose;
AH->public.verbose = 0;
- RestoreArchive((Archive *) AH);
+ RestoreArchive((Archive *) AH, false, false);
SetArchiveOptions((Archive *) AH, savDopt, savRopt);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 7df56d8b1b0..8fec24725d5 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1306,7 +1306,7 @@ main(int argc, char **argv)
* right now.
*/
if (plainText)
- RestoreArchive(fout);
+ RestoreArchive(fout, false, false);
CloseArchive(fout);
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index e85f227d182..2087935ee43 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -30,6 +30,7 @@
#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
+#include "pg_backup_archiver.h"
/* version string we expect back from pg_dump */
#define PGDUMP_VERSIONSTR "pg_dump (PostgreSQL) " PG_VERSION "\n"
@@ -65,9 +66,9 @@ static void dropTablespaces(PGconn *conn);
static void dumpTablespaces(PGconn *conn);
static void dropDBs(PGconn *conn);
static void dumpUserConfig(PGconn *conn, const char *username);
-static void dumpDatabases(PGconn *conn);
+static void dumpDatabases(PGconn *conn, bool output_clean);
static void dumpTimestamp(const char *msg);
-static int runPgDump(const char *dbname, const char *create_opts);
+static int runPgDump(const char *dbname, const char *create_opts, char *dbfile);
static void buildShSecLabels(PGconn *conn,
const char *catalog_name, Oid objectId,
const char *objtype, const char *objname,
@@ -76,11 +77,13 @@ static void executeCommand(PGconn *conn, const char *query);
static void expand_dbname_patterns(PGconn *conn, SimpleStringList *patterns,
SimpleStringList *names);
static void read_dumpall_filters(const char *filename, SimpleStringList *pattern);
+static ArchiveFormat parseDumpFormat(const char *format);
+static int createDumpId(void);
+static void createOneArchiveEntry(const char *query, const char *tag);
static char pg_dump_bin[MAXPGPATH];
static PQExpBuffer pgdumpopts;
static const char *connstr = "";
-static bool output_clean = false;
static bool skip_acls = false;
static bool verbose = false;
static bool dosync = true;
@@ -123,6 +126,10 @@ static SimpleStringList database_exclude_patterns = {NULL, NULL};
static SimpleStringList database_exclude_names = {NULL, NULL};
static char *restrict_key;
+static Archive *fout = NULL;
+static pg_compress_specification compression_spec = {0};
+static int dumpIdVal = 0;
+static ArchiveFormat archDumpFormat = archNull;
int
main(int argc, char *argv[])
@@ -148,6 +155,7 @@ main(int argc, char *argv[])
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"format", required_argument, NULL, 'F'},
/*
* the following options don't have an equivalent short option letter
@@ -197,6 +205,7 @@ main(int argc, char *argv[])
char *pgdb = NULL;
char *use_role = NULL;
const char *dumpencoding = NULL;
+ const char *format_name = "p";
trivalue prompt_password = TRI_DEFAULT;
bool data_only = false;
bool globals_only = false;
@@ -208,6 +217,7 @@ main(int argc, char *argv[])
int c,
ret;
int optindex;
+ DumpOptions dopt;
pg_logging_init(argv[0]);
pg_logging_set_level(PG_LOG_WARNING);
@@ -245,8 +255,9 @@ main(int argc, char *argv[])
}
pgdumpopts = createPQExpBuffer();
+ InitDumpOptions(&dopt);
- while ((c = getopt_long(argc, argv, "acd:E:f:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:E:f:F:gh:l:Op:rsS:tU:vwWx", long_options, &optindex)) != -1)
{
switch (c)
{
@@ -256,7 +267,7 @@ main(int argc, char *argv[])
break;
case 'c':
- output_clean = true;
+ dopt.outputClean = true;
break;
case 'd':
@@ -274,7 +285,9 @@ main(int argc, char *argv[])
appendPQExpBufferStr(pgdumpopts, " -f ");
appendShellString(pgdumpopts, filename);
break;
-
+ case 'F':
+ format_name = pg_strdup(optarg);
+ break;
case 'g':
globals_only = true;
break;
@@ -314,6 +327,7 @@ main(int argc, char *argv[])
case 'U':
pguser = pg_strdup(optarg);
+ dopt.cparams.username = pg_strdup(optarg);
break;
case 'v':
@@ -423,7 +437,7 @@ main(int argc, char *argv[])
exit_nicely(1);
}
- if (if_exists && !output_clean)
+ if (if_exists && !dopt.outputClean)
pg_fatal("option %s requires option %s",
"--if-exists", "-c/--clean");
@@ -435,6 +449,27 @@ main(int argc, char *argv[])
exit_nicely(1);
}
+ /* Determine the archive format for the dump. */
+ archDumpFormat = parseDumpFormat(format_name);
+
+ /*
+ * If a non-plain format is specified, a file name is also required as the
+ * path to the main directory.
+ */
+ if (archDumpFormat != archNull &&
+ (!filename || strcmp(filename, "") == 0))
+ {
+ pg_log_error("option %s=d|c|t requires option %s",
+ "-F/--format", "-f/--file");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
+ /* restrict-key is only supported with --format=plain */
+ if (archDumpFormat != archNull && restrict_key)
+ pg_fatal("option %s can only be used with %s=plain",
+ "--restrict-key", "--format");
+
/*
* If password values are not required in the dump, switch to using
* pg_roles which is equally useful, just more likely to have unrestricted
@@ -495,6 +530,27 @@ main(int argc, char *argv[])
if (sequence_data)
appendPQExpBufferStr(pgdumpopts, " --sequence-data");
+ /*
+ * For a non-plain format, create the output directory; otherwise open
+ * the output file if one was given, or else use stdout.
+ */
+ if (archDumpFormat != archNull)
+ {
+ Assert(filename);
+
+ /* Create a new directory, or accept an empty existing one. */
+ create_or_open_dir(filename);
+ }
+ else if (filename)
+ {
+ OPF = fopen(filename, PG_BINARY_W);
+ if (!OPF)
+ pg_fatal("could not open output file \"%s\": %m",
+ filename);
+ }
+ else
+ OPF = stdout;
+
/*
* If you don't provide a restrict key, one will be appointed for you.
*/
@@ -544,19 +600,6 @@ main(int argc, char *argv[])
expand_dbname_patterns(conn, &database_exclude_patterns,
&database_exclude_names);
- /*
- * Open the output file if required, otherwise use stdout
- */
- if (filename)
- {
- OPF = fopen(filename, PG_BINARY_W);
- if (!OPF)
- pg_fatal("could not open output file \"%s\": %m",
- filename);
- }
- else
- OPF = stdout;
-
/*
* Set the client encoding if requested.
*/
@@ -591,37 +634,114 @@ main(int argc, char *argv[])
if (quote_all_identifiers)
executeCommand(conn, "SET quote_all_identifiers = true");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Started on");
- /*
- * Enter restricted mode to block any unexpected psql meta-commands. A
- * malicious source might try to inject a variety of things via bogus
- * responses to queries. While we cannot prevent such sources from
- * affecting the destination at restore time, we can block psql
- * meta-commands so that the client machine that runs psql with the dump
- * output remains unaffected.
- */
- fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+ /* Create an archive file for the global commands. */
+ if (filename && archDumpFormat != archNull)
+ {
+ char global_path[MAXPGPATH];
- /*
- * We used to emit \connect postgres here, but that served no purpose
- * other than to break things for installations without a postgres
- * database. Everything we're restoring here is a global, so whichever
- * database we're connected to at the moment is fine.
- */
+ /* Set the file path for the global SQL commands. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", filename);
+
+ /* Open the output file */
+ fout = CreateArchive(global_path, archCustom, compression_spec,
+ dosync, archModeWrite, NULL, DATA_DIR_SYNC_METHOD_FSYNC);
+
+ /* Make dump options accessible right away */
+ SetArchiveOptions(fout, &dopt, NULL);
+ ((ArchiveHandle *) fout)->connection = conn;
+ ((ArchiveHandle *) fout)->public.numWorkers = 1;
+
+ /* Register the cleanup hook */
+ on_exit_close_archive(fout);
+
+ /* Let the archiver know how noisy to be */
+ fout->verbose = verbose;
+
+ /*
+ * We allow the server to be back to 9.2, and up to any minor release
+ * of our own major version. (See also the version check in
+ * pg_dump.c.)
+ */
+ fout->minRemoteVersion = 90200;
+ fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;
+ fout->numWorkers = 1;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump\n--\n\n", "COMMENT");
- /* Restore will need to write to the target cluster */
- fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+ /* default_transaction_read_only = off */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving default_transaction_read_only = off");
+ appendPQExpBuffer(qry, "SET default_transaction_read_only = off;\n");
+ createOneArchiveEntry(qry->data, "DEFAULT_TRANSACTION_READ_ONLY");
+ destroyPQExpBuffer(qry);
+ }
- /* Replicate encoding and std_strings in output */
- fprintf(OPF, "SET client_encoding = '%s';\n",
- pg_encoding_to_char(encoding));
- fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
- if (strcmp(std_strings, "off") == 0)
- fprintf(OPF, "SET escape_string_warning = off;\n");
- fprintf(OPF, "\n");
+ /* dumpEncoding: put the correct encoding into the archive */
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+ const char *encname = pg_encoding_to_char(encoding);
+
+ appendPQExpBufferStr(qry, "SET client_encoding = ");
+ appendStringLiteralAH(qry, encname, fout);
+ appendPQExpBufferStr(qry, ";\n");
+
+ pg_log_info("saving encoding = %s", encname);
+ createOneArchiveEntry(qry->data, "ENCODING");
+ destroyPQExpBuffer(qry);
+ }
+
+ /*
+ * dumpStdStrings: put the correct escape string behavior into the
+ * archive
+ */
+ {
+ const char *stdstrings = std_strings; /* already "on" or "off" */
+ PQExpBuffer qry = createPQExpBuffer();
+
+ pg_log_info("saving \"standard_conforming_strings = %s\"", stdstrings);
+ appendPQExpBuffer(qry, "SET standard_conforming_strings = '%s';\n",
+ stdstrings);
+ createOneArchiveEntry(qry->data, "STDSTRINGS");
+ destroyPQExpBuffer(qry);
+ }
+ }
+ else
+ {
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump\n--\n\n");
+
+ /*
+ * Enter restricted mode to block any unexpected psql meta-commands. A
+ * malicious source might try to inject a variety of things via bogus
+ * responses to queries. While we cannot prevent such sources from
+ * affecting the destination at restore time, we can block psql
+ * meta-commands so that the client machine that runs psql with the
+ * dump output remains unaffected.
+ */
+ fprintf(OPF, "\\restrict %s\n\n", restrict_key);
+
+ /*
+ * We used to emit \connect postgres here, but that served no purpose
+ * other than to break things for installations without a postgres
+ * database. Everything we're restoring here is a global, so
+ * whichever database we're connected to at the moment is fine.
+ */
+
+ /* Restore will need to write to the target cluster */
+ fprintf(OPF, "SET default_transaction_read_only = off;\n\n");
+
+ /* Replicate encoding and std_strings in output */
+ fprintf(OPF, "SET client_encoding = '%s';\n",
+ pg_encoding_to_char(encoding));
+ fprintf(OPF, "SET standard_conforming_strings = %s;\n", std_strings);
+ if (strcmp(std_strings, "off") == 0)
+ fprintf(OPF, "SET escape_string_warning = off;\n");
+ fprintf(OPF, "\n");
+ }
if (!data_only && !statistics_only && !no_schema)
{
@@ -631,7 +751,7 @@ main(int argc, char *argv[])
* and tablespaces never depend on each other. Roles could have
* grants to each other, but DROP ROLE will clean those up silently.
*/
- if (output_clean)
+ if (dopt.outputClean)
{
if (!globals_only && !roles_only && !tablespaces_only)
dropDBs(conn);
@@ -665,27 +785,42 @@ main(int argc, char *argv[])
dumpTablespaces(conn);
}
- /*
- * Exit restricted mode just before dumping the databases. pg_dump will
- * handle entering restricted mode again as appropriate.
- */
- fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ if (archDumpFormat == archNull)
+ {
+ /*
+ * Exit restricted mode just before dumping the databases. pg_dump
+ * will handle entering restricted mode again as appropriate.
+ */
+ fprintf(OPF, "\\unrestrict %s\n\n", restrict_key);
+ }
if (!globals_only && !roles_only && !tablespaces_only)
- dumpDatabases(conn);
-
- PQfinish(conn);
+ dumpDatabases(conn, dopt.outputClean);
- if (verbose)
+ if (verbose && archDumpFormat == archNull)
dumpTimestamp("Completed on");
- fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
- if (filename)
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- PostgreSQL database cluster dump complete\n--\n\n");
+
+ if (archDumpFormat != archNull)
+ {
+ RestoreOptions *ropt;
+
+ createOneArchiveEntry("--\n-- PostgreSQL database cluster dump complete\n--\n\n", "COMMENT");
+ ropt = NewRestoreOptions();
+ SetArchiveOptions(fout, &dopt, ropt);
+
+ /* Mark which entries should be output */
+ ProcessArchiveRestoreOptions(fout);
+ CloseArchive(fout);
+ }
+ else if (filename)
{
fclose(OPF);
/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);
}
@@ -696,12 +831,14 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s exports a PostgreSQL database cluster as an SQL script.\n\n"), progname);
+ printf(_("%s exports a PostgreSQL database cluster as an SQL script or to other formats.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]...\n"), progname);
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
+ printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n"
+ " plain text (default))\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" -V, --version output version information, then exit\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
@@ -776,6 +913,7 @@ static void
dropRoles(PGconn *conn)
{
PQExpBuffer buf = createPQExpBuffer();
+ PQExpBuffer delQry = createPQExpBuffer();
PGresult *res;
int i_rolname;
int i;
@@ -792,12 +930,17 @@ dropRoles(PGconn *conn)
"FROM %s "
"ORDER BY 1", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_rolname = PQfnumber(res, "rolname");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -805,15 +948,21 @@ dropRoles(PGconn *conn)
rolename = PQgetvalue(res, i, i_rolname);
- fprintf(OPF, "DROP ROLE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(rolename));
+ appendPQExpBuffer(delQry, "DROP ROLE %s%s;\n",
+ if_exists ? "IF EXISTS " : "",
+ fmtId(rolename));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropRoles");
}
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -877,7 +1026,7 @@ dumpRoles(PGconn *conn)
"FROM %s "
"ORDER BY 2", role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_oid = PQfnumber(res, "oid");
i_rolname = PQfnumber(res, "rolname");
@@ -895,7 +1044,12 @@ dumpRoles(PGconn *conn)
i_is_current_user = PQfnumber(res, "is_current_user");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Roles\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Roles\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Roles\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -999,7 +1153,10 @@ dumpRoles(PGconn *conn)
"ROLE", rolename,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoles");
}
/*
@@ -1007,15 +1164,13 @@ dumpRoles(PGconn *conn)
* We do it this way because config settings for roles could mention the
* names of other roles.
*/
- if (PQntuples(res) > 0)
- fprintf(OPF, "\n--\n-- User Configurations\n--\n");
-
for (i = 0; i < PQntuples(res); i++)
dumpUserConfig(conn, PQgetvalue(res, i, i_rolname));
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
destroyPQExpBuffer(buf);
}
@@ -1082,7 +1237,7 @@ dumpRoleMembership(PGconn *conn)
"LEFT JOIN %s ug on ug.oid = a.grantor "
"WHERE NOT (ur.rolname ~ '^pg_' AND um.rolname ~ '^pg_')"
"ORDER BY 1,2,3", role_catalog, role_catalog, role_catalog);
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
i_role = PQfnumber(res, "role");
i_member = PQfnumber(res, "member");
i_grantor = PQfnumber(res, "grantor");
@@ -1094,7 +1249,12 @@ dumpRoleMembership(PGconn *conn)
i_set_option = PQfnumber(res, "set_option");
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role memberships\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role memberships\n--\n\n", "COMMENT");
+ }
/*
* We can't dump these GRANT commands in arbitrary order, because a role
@@ -1173,6 +1333,7 @@ dumpRoleMembership(PGconn *conn)
char *grantor;
char *set_option = "true";
bool found;
+ PQExpBuffer creaQry = createPQExpBuffer();
/* If we already did this grant, don't do it again. */
if (done[i - start])
@@ -1229,8 +1390,8 @@ dumpRoleMembership(PGconn *conn)
/* Generate the actual GRANT statement. */
resetPQExpBuffer(optbuf);
- fprintf(OPF, "GRANT %s", fmtId(role));
- fprintf(OPF, " TO %s", fmtId(member));
+ appendPQExpBuffer(creaQry, "GRANT %s", fmtId(role));
+ appendPQExpBuffer(creaQry, " TO %s", fmtId(member));
if (*admin_option == 't')
appendPQExpBufferStr(optbuf, "ADMIN OPTION");
if (dump_grant_options)
@@ -1251,10 +1412,15 @@ dumpRoleMembership(PGconn *conn)
appendPQExpBufferStr(optbuf, "SET FALSE");
}
if (optbuf->data[0] != '\0')
- fprintf(OPF, " WITH %s", optbuf->data);
+ appendPQExpBuffer(creaQry, " WITH %s", optbuf->data);
if (dump_grantors)
- fprintf(OPF, " GRANTED BY %s", fmtId(grantor));
- fprintf(OPF, ";\n");
+ appendPQExpBuffer(creaQry, " GRANTED BY %s", fmtId(grantor));
+ appendPQExpBuffer(creaQry, ";\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", creaQry->data);
+ else
+ createOneArchiveEntry(creaQry->data, "dumpRoleMembership");
}
}
@@ -1266,7 +1432,8 @@ dumpRoleMembership(PGconn *conn)
PQclear(res);
destroyPQExpBuffer(buf);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1291,10 +1458,15 @@ dumpRoleGUCPrivs(PGconn *conn)
"paracl, "
"pg_catalog.acldefault('p', " CppAsString2(BOOTSTRAP_SUPERUSERID) ") AS acldefault "
"FROM pg_catalog.pg_parameter_acl "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Role privileges on configuration parameters\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Role privileges on configuration parameters\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1318,14 +1490,19 @@ dumpRoleGUCPrivs(PGconn *conn)
exit_nicely(1);
}
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpRoleGUCPrivs");
free(fparname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1337,6 +1514,7 @@ dropTablespaces(PGconn *conn)
{
PGresult *res;
int i;
+ PQExpBuffer delQry = createPQExpBuffer();
/*
* Get all tablespaces except built-in ones (which we assume are named
@@ -1345,23 +1523,34 @@ dropTablespaces(PGconn *conn)
res = executeQuery(conn, "SELECT spcname "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *spcname = PQgetvalue(res, i, 0);
- fprintf(OPF, "DROP TABLESPACE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(spcname));
+ appendPQExpBuffer(delQry, "DROP TABLESPACE %s%s;\n",
+ if_exists ? "IF EXISTS " : "",
+ fmtId(spcname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "dropTablespaces");
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
/*
@@ -1385,10 +1574,15 @@ dumpTablespaces(PGconn *conn)
"pg_catalog.shobj_description(oid, 'pg_tablespace') "
"FROM pg_catalog.pg_tablespace "
"WHERE spcname !~ '^pg_' "
- "ORDER BY 1");
+ "ORDER BY 1", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Tablespaces\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1457,14 +1651,19 @@ dumpTablespaces(PGconn *conn)
"TABLESPACE", spcname,
buf);
- fprintf(OPF, "%s", buf->data);
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpTablespaces");
free(fspcname);
destroyPQExpBuffer(buf);
}
PQclear(res);
- fprintf(OPF, "\n\n");
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1485,10 +1684,15 @@ dropDBs(PGconn *conn)
"SELECT datname "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY datname");
+ "ORDER BY datname", fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Drop databases (except postgres and template1)\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Drop databases (except postgres and template1)\n--\n\n", "COMMENT");
+ }
for (i = 0; i < PQntuples(res); i++)
{
@@ -1503,15 +1707,23 @@ dropDBs(PGconn *conn)
strcmp(dbname, "template0") != 0 &&
strcmp(dbname, "postgres") != 0)
{
- fprintf(OPF, "DROP DATABASE %s%s;\n",
- if_exists ? "IF EXISTS " : "",
- fmtId(dbname));
+ PQExpBuffer delQry = createPQExpBuffer();
+
+ appendPQExpBuffer(delQry, "DROP DATABASE %s%s;\n",
+ if_exists ? "IF EXISTS " : "",
+ fmtId(dbname));
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", delQry->data);
+ else
+ createOneArchiveEntry(delQry->data, "DROP_DATABASE");
}
}
PQclear(res);
- fprintf(OPF, "\n\n");
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n\n");
}
@@ -1531,14 +1743,25 @@ dumpUserConfig(PGconn *conn, const char *username)
appendStringLiteralConn(buf, username, conn);
appendPQExpBufferChar(buf, ')');
- res = executeQuery(conn, buf->data);
+ res = executeQuery(conn, buf->data, fout ? true : false);
if (PQntuples(res) > 0)
{
char *sanitized;
sanitized = sanitize_line(username, true);
- fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\n--\n-- User Config \"%s\"\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
}
@@ -1548,7 +1771,11 @@ dumpUserConfig(PGconn *conn, const char *username)
makeAlterConfigCommand(conn, PQgetvalue(res, i, 0),
"ROLE", username, NULL, NULL,
buf);
- fprintf(OPF, "%s", buf->data);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "%s", buf->data);
+ else
+ createOneArchiveEntry(buf->data, "dumpUserConfig");
}
PQclear(res);
@@ -1597,7 +1824,7 @@ expand_dbname_patterns(PGconn *conn,
exit_nicely(1);
}
- res = executeQuery(conn, query->data);
+ res = executeQuery(conn, query->data, fout ? true : false);
for (int i = 0; i < PQntuples(res); i++)
{
simple_string_list_append(names, PQgetvalue(res, i, 0));
@@ -1614,10 +1841,13 @@ expand_dbname_patterns(PGconn *conn,
* Dump contents of databases.
*/
static void
-dumpDatabases(PGconn *conn)
+dumpDatabases(PGconn *conn, bool output_clean)
{
PGresult *res;
int i;
+ char db_subdir[MAXPGPATH];
+ char dbfilepath[MAXPGPATH];
+ FILE *map_file = NULL;
/*
* Skip databases marked not datallowconn, since we'd be unable to connect
@@ -1631,19 +1861,49 @@ dumpDatabases(PGconn *conn)
* doesn't have some failure mode with --clean.
*/
res = executeQuery(conn,
- "SELECT datname "
+ "SELECT datname, oid "
"FROM pg_database d "
"WHERE datallowconn AND datconnlimit != -2 "
- "ORDER BY (datname <> 'template1'), datname");
+ "ORDER BY (datname <> 'template1'), datname",
+ fout ? true : false);
if (PQntuples(res) > 0)
- fprintf(OPF, "--\n-- Databases\n--\n\n");
+ {
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Databases\n--\n\n");
+ else
+ createOneArchiveEntry("--\n-- Databases\n--\n\n", "COMMENT");
+ }
+
+ /*
+ * If a directory/tar/custom format is specified, create a subdirectory
+ * under the main directory; pg_dump will create each database's dump
+ * file or subdirectory inside it.
+ */
+ if (archDumpFormat != archNull)
+ {
+ char map_file_path[MAXPGPATH];
+
+ snprintf(db_subdir, MAXPGPATH, "%s/databases", filename);
+
+ /* Create a subdirectory named "databases" under the main directory. */
+ if (mkdir(db_subdir, pg_dir_create_mode) != 0)
+ pg_fatal("could not create directory \"%s\": %m", db_subdir);
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", filename);
+
+ /* Create a map file (to store dboid and dbname) */
+ map_file = fopen(map_file_path, PG_BINARY_W);
+ if (!map_file)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+ }
for (i = 0; i < PQntuples(res); i++)
{
char *dbname = PQgetvalue(res, i, 0);
char *sanitized;
- const char *create_opts;
+ char *oid = PQgetvalue(res, i, 1);
+ const char *create_opts = "";
int ret;
/* Skip template0, even if it's not marked !datallowconn. */
@@ -1660,7 +1920,18 @@ dumpDatabases(PGconn *conn)
pg_log_info("dumping database \"%s\"", dbname);
sanitized = sanitize_line(dbname, true);
- fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+
+ if (archDumpFormat == archNull)
+ fprintf(OPF, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ else
+ {
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "--\n-- Database \"%s\" dump\n--\n\n", sanitized);
+ createOneArchiveEntry(qry->data, "COMMENT");
+ destroyPQExpBuffer(qry);
+ }
+
free(sanitized);
/*
@@ -1675,24 +1946,46 @@ dumpDatabases(PGconn *conn)
{
if (output_clean)
create_opts = "--clean --create";
+ /* Since pg_dump won't emit a \connect command, we must */
+ else if (archDumpFormat == archNull)
+ fprintf(OPF, "\\connect %s\n\n", dbname);
else
{
- create_opts = "";
- /* Since pg_dump won't emit a \connect command, we must */
- fprintf(OPF, "\\connect %s\n\n", dbname);
+ PQExpBuffer qry = createPQExpBuffer();
+
+ appendPQExpBuffer(qry, "\\connect %s\n\n", dbname);
+ createOneArchiveEntry(qry->data, "CONNECT");
+ destroyPQExpBuffer(qry);
}
}
else
create_opts = "--create";
- if (filename)
+ if (filename && archDumpFormat == archNull)
fclose(OPF);
- ret = runPgDump(dbname, create_opts);
+ /*
+ * If this is not a plain-format dump, construct the per-database output
+ * path and append the database OID and name to map.dat.
+ */
+ if (archDumpFormat != archNull)
+ {
+ if (archDumpFormat == archCustom)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid);
+ else if (archDumpFormat == archTar)
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid);
+ else
+ snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid);
+
+ /* Write one line per database: its OID and name. */
+ fprintf(map_file, "%s %s\n", oid, dbname);
+ }
+
+ ret = runPgDump(dbname, create_opts, dbfilepath);
if (ret != 0)
pg_fatal("pg_dump failed on database \"%s\", exiting", dbname);
- if (filename)
+ if (filename && archDumpFormat == archNull)
{
OPF = fopen(filename, PG_BINARY_A);
if (!OPF)
@@ -1701,6 +1994,10 @@ dumpDatabases(PGconn *conn)
}
}
+ /* Close map file */
+ if (archDumpFormat != archNull)
+ fclose(map_file);
+
PQclear(res);
}
@@ -1710,7 +2007,7 @@ dumpDatabases(PGconn *conn)
* Run pg_dump on dbname, with specified options.
*/
static int
-runPgDump(const char *dbname, const char *create_opts)
+runPgDump(const char *dbname, const char *create_opts, char *dbfile)
{
PQExpBufferData connstrbuf;
PQExpBufferData cmd;
@@ -1719,17 +2016,36 @@ runPgDump(const char *dbname, const char *create_opts)
initPQExpBuffer(&connstrbuf);
initPQExpBuffer(&cmd);
- printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
- pgdumpopts->data, create_opts);
-
/*
- * If we have a filename, use the undocumented plain-append pg_dump
- * format.
+ * If this is not a plain-format dump, pass the output file name and dump
+ * format to the pg_dump command so it produces an archive.
*/
- if (filename)
- appendPQExpBufferStr(&cmd, " -Fa ");
+ if (archDumpFormat != archNull)
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s -f %s %s", pg_dump_bin,
+ pgdumpopts->data, dbfile, create_opts);
+
+ if (archDumpFormat == archDirectory)
+ appendPQExpBufferStr(&cmd, " --format=directory ");
+ else if (archDumpFormat == archCustom)
+ appendPQExpBufferStr(&cmd, " --format=custom ");
+ else if (archDumpFormat == archTar)
+ appendPQExpBufferStr(&cmd, " --format=tar ");
+ }
else
- appendPQExpBufferStr(&cmd, " -Fp ");
+ {
+ printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
+ pgdumpopts->data, create_opts);
+
+ /*
+ * If we have a filename, use the undocumented plain-append pg_dump
+ * format.
+ */
+ if (filename)
+ appendPQExpBufferStr(&cmd, " -Fa ");
+ else
+ appendPQExpBufferStr(&cmd, " -Fp ");
+ }
/*
* Append the database name to the already-constructed stem of connection
@@ -1772,7 +2088,7 @@ buildShSecLabels(PGconn *conn, const char *catalog_name, Oid objectId,
PGresult *res;
buildShSecLabelQuery(catalog_name, objectId, sql);
- res = executeQuery(conn, sql->data);
+ res = executeQuery(conn, sql->data, fout ? true : false);
emitShSecLabels(conn, res, buffer, objtype, objname);
PQclear(res);
@@ -1874,3 +2190,68 @@ read_dumpall_filters(const char *filename, SimpleStringList *pattern)
filter_free(&fstate);
}
+
+/*
+ * parseDumpFormat
+ *
+ * Parse and validate the requested dump format.
+ */
+static ArchiveFormat
+parseDumpFormat(const char *format)
+{
+ ArchiveFormat archDumpFormat;
+
+ if (pg_strcasecmp(format, "c") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archDumpFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archDumpFormat = archDirectory;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archDumpFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archDumpFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archDumpFormat = archTar;
+ else
+ pg_fatal("unrecognized output format \"%s\"; please specify \"c\", \"d\", \"p\", or \"t\"",
+ format);
+
+ return archDumpFormat;
+}
+
+/*
+ * createDumpId
+ *
+ * Return the next unused dump ID.
+ */
+static int
+createDumpId(void)
+{
+ return ++dumpIdVal;
+}
+
+/*
+ * createOneArchiveEntry
+ *
+ * Create one archive entry for a non-text format dump.
+ */
+static void
+createOneArchiveEntry(const char *query, const char *tag)
+{
+ CatalogId nilCatalogId = {0, 0};
+
+ Assert(fout != NULL);
+
+ ArchiveEntry(fout,
+ nilCatalogId, /* catalog ID */
+ createDumpId(), /* dump ID */
+ ARCHIVE_OPTS(.tag = tag,
+ .description = tag,
+ .section = SECTION_PRE_DATA,
+ .createStmt = query));
+}
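For orientation, the on-disk layout these pieces produce looks roughly like the following. This tree is reconstructed from the code in the patch; the OIDs are examples and the annotations are schematic.

```
<archive-dir>/
    toc.glo          globals (roles, tablespaces, ...) as a custom-format archive
    map.dat          one "<oid> <dbname>" line per dumped database
    databases/
        16384        directory-format dump of one database, or
        16385.dmp    custom-format dump, or
        16386.tar    tar-format dump
```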
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 84b8d410c9e..2c83bb65d9b 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -2,7 +2,7 @@
*
* pg_restore.c
* pg_restore is an utility extracting postgres database definitions
- * from a backup archive created by pg_dump using the archiver
+ * from a backup archive created by pg_dump/pg_dumpall using the archiver
* interface.
*
* pg_restore will read the backup archive and
@@ -41,12 +41,16 @@
#include "postgres_fe.h"
#include <ctype.h>
+#include <sys/stat.h>
#ifdef HAVE_TERMIOS_H
#include <termios.h>
#endif
+#include "common/string.h"
+#include "connectdb.h"
#include "dumputils.h"
#include "fe_utils/option_utils.h"
+#include "fe_utils/string_utils.h"
#include "filter.h"
#include "getopt_long.h"
#include "parallel.h"
@@ -54,18 +58,44 @@
static void usage(const char *progname);
static void read_restore_filters(const char *filename, RestoreOptions *opts);
+static bool file_exists_in_directory(const char *dir, const char *filename);
+static int restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data,
+ bool globals_only);
+static int restore_global_objects(const char *inputFileSpec,
+ RestoreOptions *opts, int numWorkers,
+ bool globals_only);
+static int restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
+static int get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns);
+static int get_dbname_oid_list_from_mfile(const char *dumpdirpath,
+ SimplePtrList *dbname_oid_list);
+
+static bool data_only = false;
+
+/*
+ * Stores a database OID and the corresponding name.
+ */
+typedef struct DbOidName
+{
+ Oid oid;
+ char str[FLEXIBLE_ARRAY_MEMBER]; /* null-terminated string here */
+} DbOidName;
+
int
main(int argc, char **argv)
{
RestoreOptions *opts;
int c;
- int exit_code;
int numWorkers = 1;
- Archive *AH;
char *inputFileSpec;
- bool data_only = false;
bool schema_only = false;
+ int n_errors = 0;
+ bool globals_only = false;
+ SimpleStringList db_exclude_patterns = {NULL, NULL};
static int disable_triggers = 0;
static int enable_row_security = 0;
static int if_exists = 0;
@@ -89,6 +119,7 @@ main(int argc, char **argv)
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
+ {"globals-only", 0, NULL, 'g'},
{"dbname", 1, NULL, 'd'},
{"exit-on-error", 0, NULL, 'e'},
{"exclude-schema", 1, NULL, 'N'},
@@ -142,6 +173,7 @@ main(int argc, char **argv)
{"statistics-only", no_argument, &statistics_only, 1},
{"filter", required_argument, NULL, 4},
{"restrict-key", required_argument, NULL, 6},
+ {"exclude-database", required_argument, NULL, 7},
{NULL, 0, NULL, 0}
};
@@ -170,7 +202,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:gh:I:j:lL:n:N:Op:P:RsS:t:T:U:vwWx1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -197,11 +229,14 @@ main(int argc, char **argv)
if (strlen(optarg) != 0)
opts->formatName = pg_strdup(optarg);
break;
+ case 'g':
+ /* restore only global sql commands. */
+ globals_only = true;
+ break;
case 'h':
if (strlen(optarg) != 0)
opts->cparams.pghost = pg_strdup(optarg);
break;
-
case 'j': /* number of restore jobs */
if (!option_parse_int(optarg, "-j/--jobs", 1,
PG_MAX_JOBS,
@@ -321,6 +356,10 @@ main(int argc, char **argv)
opts->restrict_key = pg_strdup(optarg);
break;
+ case 7: /* database patterns to skip */
+ simple_string_list_append(&db_exclude_patterns, optarg);
+ break;
+
default:
/* getopt_long already emitted a complaint */
pg_log_error_hint("Try \"%s --help\" for more information.", progname);
@@ -347,6 +386,14 @@ main(int argc, char **argv)
if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
pg_fatal("one of -d/--dbname and -f/--file must be specified");
+ if (db_exclude_patterns.head != NULL && globals_only)
+ {
+ pg_log_error("option %s cannot be used together with %s",
+ "--exclude-database", "-g/--globals-only");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ exit_nicely(1);
+ }
+
/* Should get at most one of -d and -f, else user is confused */
if (opts->cparams.dbname)
{
@@ -420,6 +467,10 @@ main(int argc, char **argv)
pg_fatal("options %s and %s cannot be used together",
"-1/--single-transaction", "--transaction-size");
+ if (data_only && globals_only)
+ pg_fatal("options %s and %s cannot be used together",
+ "-a/--data-only", "-g/--globals-only");
+
/*
* -C is not compatible with -1, because we can't create a database inside
* a transaction block.
@@ -485,6 +536,128 @@ main(int argc, char **argv)
opts->formatName);
}
+ /*
+ * If toc.glo file is present, then restore all the databases from
+ * map.dat, but skip restoring those matching --exclude-database patterns.
+ */
+ if (inputFileSpec != NULL &&
+ (file_exists_in_directory(inputFileSpec, "toc.glo")))
+ {
+ /*
+ * Can only use --list or --use-list options with a single database
+ * dump.
+ */
+ if (opts->tocSummary)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-l/--list");
+ else if (opts->tocFile)
+ pg_fatal("option %s cannot be used when restoring an archive created by pg_dumpall",
+ "-L/--use-list");
+
+ /*
+ * To restore from a pg_dumpall archive, -C (create database) option
+ * must be specified unless we are only restoring globals.
+ */
+ if (!globals_only && opts->createDB != 1)
+ {
+ pg_log_error("option %s must be specified when restoring an archive created by pg_dumpall",
+ "-C/--create");
+ pg_log_error_hint("Try \"%s --help\" for more information.", progname);
+ pg_log_error_hint("Individual databases can be restored using their specific archives.");
+ exit_nicely(1);
+ }
+
+ /* With --globals-only, restore just the globals and skip the databases. */
+ if (globals_only)
+ {
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+ n_errors = restore_global_objects(global_path, opts, numWorkers, globals_only);
+
+ pg_log_info("database restoring skipped because option %s was specified",
+ "-g/--globals-only");
+ }
+ else
+ {
+ /* Now restore all the databases from map.dat */
+ n_errors = restore_all_databases(inputFileSpec, db_exclude_patterns,
+ opts, numWorkers);
+ }
+
+ /* Free db pattern list. */
+ simple_string_list_destroy(&db_exclude_patterns);
+ }
+ else
+ {
+ if (db_exclude_patterns.head != NULL)
+ {
+ simple_string_list_destroy(&db_exclude_patterns);
+ pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
+ "--exclude-database");
+ }
+
+ if (globals_only)
+ pg_fatal("option %s can be used only when restoring an archive created by pg_dumpall",
+ "-g/--globals-only");
+
+ /* No toc.glo file, so restore a single pg_dump archive. */
+ n_errors = restore_one_database(inputFileSpec, opts,
+ numWorkers, false, globals_only);
+ }
+
+ /* Done, print a summary of ignored errors during restore. */
+ if (n_errors)
+ {
+ pg_log_warning("errors ignored on restore: %d", n_errors);
+ return 1;
+ }
+
+ return 0;
+}
+
+/*
+ * restore_global_objects
+ *
+ * Restore all global objects.
+ *
+ * If globals_only is set, DROP DATABASE commands are skipped during restore.
+ */
+static int
+restore_global_objects(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool globals_only)
+{
+ int nerror = 0;
+ int format = opts->format;
+
+ /* Set format as custom so that toc.glo file can be read. */
+ opts->format = archCustom;
+
+ if (!data_only)
+ nerror = restore_one_database(inputFileSpec, opts, numWorkers,
+ false, globals_only);
+
+ /* Reset format value. */
+ opts->format = format;
+
+ return nerror;
+}
+
+/*
+ * restore_one_database
+ *
+ * Restore one database using its toc.dat file.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_one_database(const char *inputFileSpec, RestoreOptions *opts,
+ int numWorkers, bool append_data, bool globals_only)
+{
+ Archive *AH;
+ int n_errors;
+
AH = OpenArchive(inputFileSpec, opts->format);
SetArchiveOptions(AH, NULL, opts);
@@ -492,9 +665,15 @@ main(int argc, char **argv)
/*
* We don't have a connection yet but that doesn't matter. The connection
* is initialized to NULL and if we terminate through exit_nicely() while
- * it's still NULL, the cleanup function will just be a no-op.
+ * it's still NULL, the cleanup function will just be a no-op. If we are
+ * restoring multiple databases, just update the AH handle registered for
+ * cleanup: the previous entry is already in the array and its connection
+ * has been closed, so we can reuse the same array slot.
*/
- on_exit_close_archive(AH);
+ if (!append_data)
+ on_exit_close_archive(AH);
+ else
+ replace_on_exit_close_archive(AH);
/* Let the archiver know how noisy to be */
AH->verbose = opts->verbose;
@@ -514,25 +693,21 @@ main(int argc, char **argv)
else
{
ProcessArchiveRestoreOptions(AH);
- RestoreArchive(AH);
+ RestoreArchive(AH, append_data, globals_only);
}
- /* done, print a summary of ignored errors */
- if (AH->n_errors)
- pg_log_warning("errors ignored on restore: %d", AH->n_errors);
+ n_errors = AH->n_errors;
/* AH may be freed in CloseArchive? */
- exit_code = AH->n_errors ? 1 : 0;
-
CloseArchive(AH);
- return exit_code;
+ return n_errors;
}
static void
usage(const char *progname)
{
- printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("%s restores PostgreSQL databases from archives created by pg_dump or pg_dumpall.\n\n"), progname);
printf(_("Usage:\n"));
printf(_(" %s [OPTION]... [FILE]\n"), progname);
@@ -550,6 +725,7 @@ usage(const char *progname)
printf(_(" -c, --clean clean (drop) database objects before recreating\n"));
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
+ printf(_(" -g, --globals-only restore only global objects, no databases\n"));
printf(_(" -I, --index=NAME restore named index\n"));
printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
@@ -566,6 +742,7 @@ usage(const char *progname)
printf(_(" -1, --single-transaction restore as a single transaction\n"));
printf(_(" --disable-triggers disable triggers during data-only restore\n"));
printf(_(" --enable-row-security enable row security\n"));
+ printf(_(" --exclude-database=PATTERN do not restore the specified database(s)\n"));
printf(_(" --filter=FILENAME restore or skip objects based on expressions\n"
" in FILENAME\n"));
printf(_(" --if-exists use IF EXISTS when dropping objects\n"));
@@ -601,8 +778,8 @@ usage(const char *progname)
printf(_(" --role=ROLENAME do SET ROLE before restore\n"));
printf(_("\n"
- "The options -I, -n, -N, -P, -t, -T, and --section can be combined and specified\n"
- "multiple times to select multiple objects.\n"));
+ "The options -I, -n, -N, -P, -t, -T, --section, and --exclude-database can be\n"
+ "combined and specified multiple times to select multiple objects.\n"));
printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
printf(_("Report bugs to <%s>.\n"), PACKAGE_BUGREPORT);
printf(_("%s home page: <%s>\n"), PACKAGE_NAME, PACKAGE_URL);
@@ -707,3 +884,407 @@ read_restore_filters(const char *filename, RestoreOptions *opts)
filter_free(&fstate);
}
+
+/*
+ * file_exists_in_directory
+ *
+ * Returns true if the file exists in the given directory.
+ */
+static bool
+file_exists_in_directory(const char *dir, const char *filename)
+{
+ struct stat st;
+ char buf[MAXPGPATH];
+
+ if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+ pg_fatal("directory name too long: \"%s\"", dir);
+
+ return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
+/*
+ * get_dbnames_list_to_restore
+ *
+ * Mark for skipping any entries in dbname_oid_list that match an entry in
+ * the db_exclude_patterns list.
+ *
+ * Returns the number of databases to be restored.
+ */
+static int
+get_dbnames_list_to_restore(PGconn *conn,
+ SimplePtrList *dbname_oid_list,
+ SimpleStringList db_exclude_patterns)
+{
+ int count_db = 0;
+ PQExpBuffer query;
+ PGresult *res;
+
+ query = createPQExpBuffer();
+
+ if (!conn && db_exclude_patterns.head != NULL)
+ pg_log_info("no database connection available, so --exclude-database patterns will be matched as literal database names");
+
+ /*
+ * Walk the database name list, marking any entry that should be skipped
+ * during restore.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list->head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ bool skip_db_restore = false;
+ PQExpBuffer db_lit = createPQExpBuffer();
+
+ appendStringLiteralConn(db_lit, dbidname->str, conn);
+
+ for (SimpleStringListCell *pat_cell = db_exclude_patterns.head; pat_cell; pat_cell = pat_cell->next)
+ {
+ /*
+ * If there is an exact match then we don't need to try a pattern
+ * match
+ */
+ if (pg_strcasecmp(dbidname->str, pat_cell->val) == 0)
+ skip_db_restore = true;
+ /* Otherwise, try a pattern match if there is a connection */
+ else if (conn)
+ {
+ int dotcnt;
+
+ appendPQExpBufferStr(query, "SELECT 1 ");
+ processSQLNamePattern(conn, query, pat_cell->val, false,
+ false, NULL, db_lit->data,
+ NULL, NULL, NULL, &dotcnt);
+
+ if (dotcnt > 0)
+ {
+ pg_log_error("improper qualified name (too many dotted names): %s",
+ dbidname->str);
+ PQfinish(conn);
+ exit_nicely(1);
+ }
+
+ res = executeQuery(conn, query->data, false);
+
+ if ((PQresultStatus(res) == PGRES_TUPLES_OK) && PQntuples(res))
+ {
+ skip_db_restore = true;
+ pg_log_info("database name \"%s\" matches exclude pattern \"%s\"", dbidname->str, pat_cell->val);
+ }
+
+ PQclear(res);
+ resetPQExpBuffer(query);
+ }
+
+ if (skip_db_restore)
+ break;
+ }
+
+ destroyPQExpBuffer(db_lit);
+
+ /*
+ * Mark the database to be skipped, or count it among those to be
+ * restored.
+ */
+ if (skip_db_restore)
+ {
+ pg_log_info("excluding database \"%s\"", dbidname->str);
+ dbidname->oid = InvalidOid;
+ }
+ else
+ {
+ count_db++;
+ }
+ }
+
+ destroyPQExpBuffer(query);
+
+ return count_db;
+}
+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Read the map.dat file line by line and build a list of database names
+ * and their corresponding OIDs.
+ *
+ * Returns the total number of database entries found in map.dat.
+ */
+static int
+get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimplePtrList *dbname_oid_list)
+{
+ StringInfoData linebuf;
+ FILE *pfile;
+ char map_file_path[MAXPGPATH];
+ int count = 0;
+
+ /*
+ * If there is no map.dat file in the dump, return early, as there are no
+ * databases to restore.
+ */
+ if (!file_exists_in_directory(dumpdirpath, "map.dat"))
+ {
+ pg_log_info("database restoring is skipped because file \"%s\" does not exist in directory \"%s\"", "map.dat", dumpdirpath);
+ return 0;
+ }
+
+ snprintf(map_file_path, MAXPGPATH, "%s/map.dat", dumpdirpath);
+
+ /* Open map.dat file. */
+ pfile = fopen(map_file_path, PG_BINARY_R);
+
+ if (pfile == NULL)
+ pg_fatal("could not open file \"%s\": %m", map_file_path);
+
+ initStringInfo(&linebuf);
+
+ /* Append all the dbname/db_oid combinations to the list. */
+ while (pg_get_line_buf(pfile, &linebuf))
+ {
+ Oid db_oid = InvalidOid;
+ char *dbname;
+ DbOidName *dbidname;
+ int namelen;
+ char *p = linebuf.data;
+
+ /* Extract dboid. */
+ while (isdigit((unsigned char) *p))
+ p++;
+ if (p > linebuf.data && *p == ' ')
+ {
+ sscanf(linebuf.data, "%u", &db_oid);
+ p++;
+ }
+
+ /* dbname is the rest of the line */
+ dbname = p;
+ namelen = strlen(dbname);
+
+ /* Report error and exit if the file has any corrupted data. */
+ if (!OidIsValid(db_oid) || namelen <= 1)
+ pg_fatal("invalid entry in file \"%s\" on line %d", map_file_path,
+ count + 1);
+
+ pg_log_info("found database \"%s\" (OID: %u) in file \"%s\"",
+ dbname, db_oid, map_file_path);
+
+ dbidname = pg_malloc(offsetof(DbOidName, str) + namelen + 1);
+ dbidname->oid = db_oid;
+ strlcpy(dbidname->str, dbname, namelen);	/* don't copy the trailing newline */
+
+ simple_ptr_list_append(dbname_oid_list, dbidname);
+ count++;
+ }
+
+ /* Close map.dat file. */
+ fclose(pfile);
+
+ return count;
+}
+
+/*
+ * restore_all_databases
+ *
+ * Restore all databases whose dumps are present in the directory, using
+ * the map.dat file mapping.
+ *
+ * Databases matching an --exclude-database pattern are skipped.
+ *
+ * Returns the number of errors encountered during the restore.
+ */
+static int
+restore_all_databases(const char *inputFileSpec,
+ SimpleStringList db_exclude_patterns, RestoreOptions *opts,
+ int numWorkers)
+{
+ SimplePtrList dbname_oid_list = {NULL, NULL};
+ int num_db_restore = 0;
+ int num_total_db;
+ int n_errors_total;
+ char *connected_db = NULL;
+ bool dumpData = opts->dumpData;
+ bool dumpSchema = opts->dumpSchema;
+ bool dumpStatistics = opts->dumpStatistics;
+ PGconn *conn = NULL;
+ char global_path[MAXPGPATH];
+
+ /* Set path for toc.glo file. */
+ snprintf(global_path, MAXPGPATH, "%s/toc.glo", inputFileSpec);
+
+ /* Save the connection database name so it can be reused for each database. */
+ if (opts->cparams.dbname)
+ connected_db = opts->cparams.dbname;
+
+ num_total_db = get_dbname_oid_list_from_mfile(inputFileSpec, &dbname_oid_list);
+
+ /* If map.dat has no entries, return after processing global commands. */
+ if (dbname_oid_list.head == NULL)
+ return restore_global_objects(global_path, opts, numWorkers, false);
+
+ pg_log_info(ngettext("found %d database name in \"%s\"",
+ "found %d database names in \"%s\"",
+ num_total_db),
+ num_total_db, "map.dat");
+
+ /*
+ * If --exclude-database patterns were given, connect to a database so
+ * the patterns can be evaluated.
+ */
+ if (db_exclude_patterns.head != NULL)
+ {
+ if (opts->cparams.dbname)
+ {
+ conn = ConnectDatabase(opts->cparams.dbname, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ if (!conn)
+ pg_fatal("could not connect to database \"%s\"", opts->cparams.dbname);
+ }
+
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "postgres");
+
+ conn = ConnectDatabase("postgres", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+
+ /* Try with template1. */
+ if (!conn)
+ {
+ pg_log_info("trying to connect to database \"%s\"", "template1");
+
+ conn = ConnectDatabase("template1", NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ }
+ }
+ }
+
+ /*
+ * filter the db list according to the exclude patterns
+ */
+ num_db_restore = get_dbnames_list_to_restore(conn, &dbname_oid_list,
+ db_exclude_patterns);
+
+ /* Close the db connection as we are done with globals and patterns. */
+ if (conn)
+ PQfinish(conn);
+
+ /* Open the toc.glo file and restore all the global SQL commands. */
+ n_errors_total = restore_global_objects(global_path, opts, numWorkers, false);
+
+ /* Exit if no db needs to be restored. */
+ if (dbname_oid_list.head == NULL || num_db_restore == 0)
+ {
+ pg_log_info(ngettext("no database needs restoring out of %d database",
+ "no database needs restoring out of %d databases", num_total_db),
+ num_total_db);
+ return n_errors_total;
+ }
+
+ pg_log_info("need to restore %d databases out of %d databases", num_db_restore, num_total_db);
+
+ /*
+ * We have a list of databases to restore after processing the
+ * exclude-database switch(es). Now we can restore them one by one.
+ */
+ for (SimplePtrListCell *db_cell = dbname_oid_list.head;
+ db_cell; db_cell = db_cell->next)
+ {
+ DbOidName *dbidname = (DbOidName *) db_cell->ptr;
+ char subdirpath[MAXPGPATH];
+ char subdirdbpath[MAXPGPATH];
+ char dbfilename[MAXPGPATH];
+ int n_errors;
+
+ /* ignore dbs marked for skipping */
+ if (dbidname->oid == InvalidOid)
+ continue;
+
+ /*
+ * We need to reset override_dbname so that objects can be restored
+ * into an already created database. (used with -d/--dbname option)
+ */
+ if (opts->cparams.override_dbname)
+ {
+ pfree(opts->cparams.override_dbname);
+ opts->cparams.override_dbname = NULL;
+ }
+
+ snprintf(subdirdbpath, MAXPGPATH, "%s/databases", inputFileSpec);
+
+ /*
+ * Look for the database dump file/dir. If there is an {oid}.tar or
+ * {oid}.dmp file, use it. Otherwise try to use a directory called
+ * {oid}
+ */
+ snprintf(dbfilename, MAXPGPATH, "%u.tar", dbidname->oid);
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.tar", inputFileSpec, dbidname->oid);
+ else
+ {
+ snprintf(dbfilename, MAXPGPATH, "%u.dmp", dbidname->oid);
+
+ if (file_exists_in_directory(subdirdbpath, dbfilename))
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u.dmp", inputFileSpec, dbidname->oid);
+ else
+ snprintf(subdirpath, MAXPGPATH, "%s/databases/%u", inputFileSpec, dbidname->oid);
+ }
+
+ pg_log_info("restoring database \"%s\"", dbidname->str);
+
+ /* If database is already created, then don't set createDB flag. */
+ if (opts->cparams.dbname)
+ {
+ PGconn *test_conn;
+
+ test_conn = ConnectDatabase(dbidname->str, NULL, opts->cparams.pghost,
+ opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT,
+ false, progname, NULL, NULL, NULL, NULL);
+ if (test_conn)
+ {
+ PQfinish(test_conn);
+
+ /* Use already created database for connection. */
+ opts->createDB = 0;
+ opts->cparams.dbname = dbidname->str;
+ }
+ else
+ {
+ /* we'll have to create it */
+ opts->createDB = 1;
+ opts->cparams.dbname = connected_db;
+ }
+ }
+
+ /*
+ * Reset flags - might have been reset in pg_backup_archiver.c by the
+ * previous restore.
+ */
+ opts->dumpData = dumpData;
+ opts->dumpSchema = dumpSchema;
+ opts->dumpStatistics = dumpStatistics;
+
+ /* Restore the single database. */
+ n_errors = restore_one_database(subdirpath, opts, numWorkers, true, false);
+
+ /* Print a summary of ignored errors during single database restore. */
+ if (n_errors)
+ {
+ n_errors_total += n_errors;
+ pg_log_warning("errors ignored on database \"%s\" restore: %d", dbidname->str, n_errors);
+ }
+ }
+
+ /* Log number of processed databases. */
+ pg_log_info("number of restored databases is %d", num_db_restore);
+
+ /* Free dbname and dboid list. */
+ simple_ptr_list_destroy(&dbname_oid_list);
+
+ return n_errors_total;
+}
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
old mode 100644
new mode 100755
index ab9310eb42b..9221d3c9f5c
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -244,4 +244,31 @@ command_fails_like(
'pg_dumpall: option --exclude-database cannot be used together with -g/--globals-only'
);
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'x' ],
+ qr/\Qpg_dumpall: error: unrecognized output format "x";\E/,
+ 'pg_dumpall: unrecognized output format');
+
+command_fails_like(
+ [ 'pg_dumpall', '--format', 'd', '--restrict-key=uu', '-f dumpfile' ],
+ qr/\Qpg_dumpall: error: option --restrict-key can only be used with --format=plain\E/,
+ 'pg_dumpall: --restrict-key can only be used with plain dump format');
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '--globals-only', '-d', 'xxx' ],
+ qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/,
+ 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--exclude-database=foo', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option --exclude-database can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --exclude-database is used in pg_restore with dump of pg_dump'
+);
+
+command_fails_like(
+ [ 'pg_restore', '--globals-only', '-d', 'xxx', 'dumpdir' ],
+ qr/\Qpg_restore: error: option -g\/--globals-only can be used only when restoring an archive created by pg_dumpall\E/,
+ 'When option --globals-only is not used in pg_restore with dump of pg_dump'
+);
done_testing();
diff --git a/src/bin/pg_dump/t/007_pg_dumpall.pl b/src/bin/pg_dump/t/007_pg_dumpall.pl
new file mode 100755
index 00000000000..3c7d2ad7c53
--- /dev/null
+++ b/src/bin/pg_dump/t/007_pg_dumpall.pl
@@ -0,0 +1,396 @@
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+my $tempdir = PostgreSQL::Test::Utils::tempdir;
+my $run_db = 'postgres';
+my $sep = $windows_os ? "\\" : "/";
+
+# Tablespace locations used by "restore_tablespace" test case.
+my $tablespace1 = "${tempdir}${sep}tbl1";
+my $tablespace2 = "${tempdir}${sep}tbl2";
+mkdir($tablespace1) || die "mkdir $tablespace1 $!";
+mkdir($tablespace2) || die "mkdir $tablespace2 $!";
+
+# Escape tablespace locations on Windows.
+$tablespace1 = $windows_os ? ($tablespace1 =~ s/\\/\\\\/gr) : $tablespace1;
+$tablespace2 = $windows_os ? ($tablespace2 =~ s/\\/\\\\/gr) : $tablespace2;
+
+# Where pg_dumpall will be executed.
+my $node = PostgreSQL::Test::Cluster->new('node');
+$node->init;
+$node->start;
+
+
+###############################################################
+# Definition of the pg_dumpall test cases to run.
+#
+# Each test case is named, and that name is used for failure reporting and to
+# locate the dump and restore artifacts the test asserts on.
+#
+# The "setup_sql" is a valid psql script containing SQL commands to execute
+# before the tests run. All of the setups are executed before any test runs.
+#
+# The "dump_cmd" and "restore_cmd" are the commands that will be executed. The
+# "restore_cmd" must include the --file flag to save the restore output so
+# that we can assert on it.
+#
+# "like" and "unlike" are regexps matched against the pg_restore output. Each
+# test case must provide at least one of them, and may provide both. See the
+# "excluding_databases" test case for an example.
+my %pgdumpall_runs = (
+ restore_roles => {
+ setup_sql => '
+ CREATE ROLE dumpall WITH ENCRYPTED PASSWORD \'admin\' SUPERUSER;
+ CREATE ROLE dumpall2 WITH REPLICATION CONNECTION LIMIT 10;',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_roles.sql",
+ "$tempdir/restore_roles",
+ ],
+ like => qr/
+ \s*\QCREATE ROLE dumpall2;\E
+ \s*\QALTER ROLE dumpall2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN REPLICATION NOBYPASSRLS CONNECTION LIMIT 10;\E
+ /xm
+ },
+
+ restore_tablespace => {
+ setup_sql => "
+ CREATE ROLE tap;
+ CREATE TABLESPACE tbl1 OWNER tap LOCATION '$tablespace1';
+ CREATE TABLESPACE tbl2 OWNER tap LOCATION '$tablespace2' WITH (seq_page_cost=1.0);",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_tablespace.sql",
+ "$tempdir/restore_tablespace",
+ ],
+ # Match "E" as optional since it is added on LOCATION when running on
+ # Windows.
+ like => qr/^
+ \n\QCREATE TABLESPACE tbl2 OWNER tap LOCATION \E(?:E)?\Q'$tablespace2';\E
+ \n\QALTER TABLESPACE tbl2 SET (seq_page_cost=1.0);\E
+ /xm,
+ },
+
+ restore_grants => {
+ setup_sql => "
+ CREATE DATABASE tapgrantsdb;
+ CREATE SCHEMA private;
+ CREATE SEQUENCE serial START 101;
+ CREATE FUNCTION fn() RETURNS void AS \$\$
+ BEGIN
+ END;
+ \$\$ LANGUAGE plpgsql;
+ CREATE ROLE super;
+ CREATE ROLE grant1;
+ CREATE ROLE grant2;
+ CREATE ROLE grant3;
+ CREATE ROLE grant4;
+ CREATE ROLE grant5;
+ CREATE ROLE grant6;
+ CREATE ROLE grant7;
+ CREATE ROLE grant8;
+
+ CREATE TABLE t (id int);
+ INSERT INTO t VALUES (1), (2), (3), (4);
+
+ GRANT SELECT ON TABLE t TO grant1;
+ GRANT INSERT ON TABLE t TO grant2;
+ GRANT ALL PRIVILEGES ON TABLE t to grant3;
+ GRANT CONNECT, CREATE ON DATABASE tapgrantsdb TO grant4;
+ GRANT USAGE, CREATE ON SCHEMA private TO grant5;
+ GRANT USAGE, SELECT, UPDATE ON SEQUENCE serial TO grant6;
+ GRANT super TO grant7;
+ GRANT EXECUTE ON FUNCTION fn() TO grant8;
+ ",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/restore_grants.sql",
+ "$tempdir/restore_grants",
+ ],
+ like => qr/^
+ \n\QGRANT super TO grant7 WITH INHERIT TRUE GRANTED BY\E
+ (.*\n)*
+ \n\QGRANT ALL ON SCHEMA private TO grant5;\E
+ (.*\n)*
+ \n\QGRANT ALL ON FUNCTION public.fn() TO grant8;\E
+ (.*\n)*
+ \n\QGRANT ALL ON SEQUENCE public.serial TO grant6;\E
+ (.*\n)*
+ \n\QGRANT SELECT ON TABLE public.t TO grant1;\E
+ \n\QGRANT INSERT ON TABLE public.t TO grant2;\E
+ \n\QGRANT ALL ON TABLE public.t TO grant3;\E
+ (.*\n)*
+ \n\QGRANT CREATE,CONNECT ON DATABASE tapgrantsdb TO grant4;\E
+ /xm,
+ },
+
+ excluding_databases => {
+ setup_sql => 'CREATE DATABASE db1;
+ \c db1
+ CREATE TABLE t1 (id int);
+ INSERT INTO t1 VALUES (1), (2), (3), (4);
+ CREATE TABLE t2 (id int);
+ INSERT INTO t2 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db2;
+ \c db2
+ CREATE TABLE t3 (id int);
+ INSERT INTO t3 VALUES (1), (2), (3), (4);
+ CREATE TABLE t4 (id int);
+ INSERT INTO t4 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex3;
+ \c dbex3
+ CREATE TABLE t5 (id int);
+ INSERT INTO t5 VALUES (1), (2), (3), (4);
+ CREATE TABLE t6 (id int);
+ INSERT INTO t6 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE dbex4;
+ \c dbex4
+ CREATE TABLE t7 (id int);
+ INSERT INTO t7 VALUES (1), (2), (3), (4);
+ CREATE TABLE t8 (id int);
+ INSERT INTO t8 VALUES (1), (2), (3), (4);
+
+ CREATE DATABASE db5;
+ \c db5
+ CREATE TABLE t9 (id int);
+ INSERT INTO t9 VALUES (1), (2), (3), (4);
+ CREATE TABLE t10 (id int);
+ INSERT INTO t10 VALUES (1), (2), (3), (4);
+ ',
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases",
+ '--exclude-database' => 'dbex*',
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/excluding_databases.sql",
+ '--exclude-database' => 'db5',
+ "$tempdir/excluding_databases",
+ ],
+ like => qr/^
+ \n\QCREATE DATABASE db1\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t1 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t2 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db2\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t3 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t4 (/xm,
+ unlike => qr/^
+ \n\QCREATE DATABASE dbex3\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t5 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t6 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE dbex4\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t7 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t8 (\E
+ (.*\n)*
+ \n\QCREATE DATABASE db5\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t9 (\E
+ (.*\n)*
+ \n\QCREATE TABLE public.t10 (\E
+ /xm,
+ },
+
+ format_directory => {
+ setup_sql => "CREATE TABLE format_directory(a int, b boolean, c text);
+ INSERT INTO format_directory VALUES (1, true, 'name1'), (2, false, 'name2');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'directory',
+ '--file' => "$tempdir/format_directory.sql",
+ "$tempdir/format_directory",
+ ],
+ like => qr/^\n\QCOPY public.format_directory (a, b, c) FROM stdin;/xm
+ },
+
+ format_tar => {
+ setup_sql => "CREATE TABLE format_tar(a int, b boolean, c text);
+ INSERT INTO format_tar VALUES (1, false, 'name3'), (2, true, 'name4');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'tar',
+ '--file' => "$tempdir/format_tar.sql",
+ "$tempdir/format_tar",
+ ],
+ like => qr/^\n\QCOPY public.format_tar (a, b, c) FROM stdin;/xm
+ },
+
+ format_custom => {
+ setup_sql => "CREATE TABLE format_custom(a int, b boolean, c text);
+ INSERT INTO format_custom VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C',
+ '--format' => 'custom',
+ '--file' => "$tempdir/format_custom.sql",
+ "$tempdir/format_custom",
+ ],
+ like => qr/^\n\QCOPY public.format_custom (a, b, c) FROM stdin;/xm
+ },
+
+ dump_globals_only => {
+ setup_sql => "CREATE TABLE format_dir(a int, b boolean, c text);
+ INSERT INTO format_dir VALUES (1, false, 'name5'), (2, true, 'name6');",
+ dump_cmd => [
+ 'pg_dumpall',
+ '--format' => 'directory',
+ '--globals-only',
+ '--file' => "$tempdir/dump_globals_only",
+ ],
+ restore_cmd => [
+ 'pg_restore', '-C', '--globals-only',
+ '--format' => 'directory',
+ '--file' => "$tempdir/dump_globals_only.sql",
+ "$tempdir/dump_globals_only",
+ ],
+ like => qr/
+ ^\s*\QCREATE ROLE dumpall;\E\s*\n
+ /xm
+ },);
+
+# First execute the setup_sql
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ if ($pgdumpall_runs{$run}->{setup_sql})
+ {
+ $node->safe_psql($run_db, $pgdumpall_runs{$run}->{setup_sql});
+ }
+}
+
+# Execute the tests
+foreach my $run (sort keys %pgdumpall_runs)
+{
+ # Create a new target cluster to pg_restore each test case run so that we
+ # don't need to take care of the cleanup from the target cluster after each
+ # run.
+ my $target_node = PostgreSQL::Test::Cluster->new("target_$run");
+ $target_node->init;
+ $target_node->start;
+
+ # Dumpall from node cluster.
+ $node->command_ok(\@{ $pgdumpall_runs{$run}->{dump_cmd} },
+ "$run: pg_dumpall runs");
+
+ # Restore the dump on "target_node" cluster.
+ my @restore_cmd = (
+ @{ $pgdumpall_runs{$run}->{restore_cmd} },
+ '--host', $target_node->host, '--port', $target_node->port);
+
+ my ($stdout, $stderr) = run_command(\@restore_cmd);
+
+ # pg_restore --file output file.
+ my $output_file = slurp_file("$tempdir/${run}.sql");
+
+ if ( !($pgdumpall_runs{$run}->{like})
+ && !($pgdumpall_runs{$run}->{unlike}))
+ {
+ die "missing \"like\" or \"unlike\" in test \"$run\"";
+ }
+
+ if ($pgdumpall_runs{$run}->{like})
+ {
+ like($output_file, $pgdumpall_runs{$run}->{like}, "should dump $run");
+ }
+
+ if ($pgdumpall_runs{$run}->{unlike})
+ {
+ unlike(
+ $output_file,
+ $pgdumpall_runs{$run}->{unlike},
+ "should not dump $run");
+ }
+}
+
+# Some negative test cases with a dump of pg_dumpall restored using pg_restore
+# test case 1: when -C is not used in pg_restore with a dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom",
+ '--format' => 'custom',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -C\/--create must be specified when restoring an archive created by pg_dumpall\E/,
+ 'When -C is not used in pg_restore with dump of pg_dumpall');
+
+# test case 2: When --list option is used with dump of pg_dumpall
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '--list',
+ '--file' => "$tempdir/error_test.sql",
+ ],
+ qr/\Qpg_restore: error: option -l\/--list cannot be used when restoring an archive created by pg_dumpall\E/,
+ 'When --list is used in pg_restore with dump of pg_dumpall');
+
+# test case 3: when a non-existent database is given with the -d option
+$node->command_fails_like(
+ [
+ 'pg_restore',
+ "$tempdir/format_custom", '-C',
+ '--format' => 'custom',
+ '-d' => 'dbpq',
+ ],
+ qr/\QFATAL: database "dbpq" does not exist\E/,
+ 'When non-existent database is given with -d option in pg_restore with dump of pg_dumpall'
+);
+
+$node->stop('fast');
+
+done_testing();
--
2.47.3
On 2026-01-06 Tu 1:26 AM, Mahendra Singh Thalor wrote:
It's probably harmless, we connect to the databases further down to do actual work. But it's also not nice. The toc.glo seems to have a bunch of extraneous entries of type COMMENT and CONNECT. Why is that? As far as possible this should have output pretty much identical to a plain pg_dumpall.
If we don't dump those comments in non-text format, then the output of
"pg_restore -f filename dump_non_text" will not be the same as the
plain dump of pg_dumpall.

Here, I am attaching an updated patch for review and testing.
Note: some of the review comments are still not fixed. I am working on
those and will post an updated patch.
But these cases are not producing anything like identical output.
Here is the diff produced by my standard test case. (I have modified the
output to unify the restrict/unrestrict keys.)
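The key normalization described above can be sketched as a small shell filter. This is an assumption about how it might be done, not the script actually used in the thread; the file names in the usage comment are hypothetical:

```shell
# Replace the randomly generated \restrict/\unrestrict keys in pg_dump
# output with a fixed placeholder so two dumps can be compared with diff.
normalize() {
    sed -E 's/^(\\(un)?restrict)[[:space:]]+[[:alnum:]]+[[:space:]]*$/\1 xxxxx/' "$@"
}

# Usage sketch:
#   normalize textout > textout.norm
#   normalize restore_out > restore_out.norm
#   diff -u textout.norm restore_out.norm
printf '\\restrict q7Fz2\nSET client_encoding = x;\n\\unrestrict q7Fz2\n' | normalize
```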
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
dumpall_diff (text/plain; charset=UTF-8)
--- textout 2026-01-06 11:22:59.443353974 -0500
+++ restore_out 2026-01-06 11:22:53.249383868 -0500
@@ -1,42 +1,907 @@
--
--- PostgreSQL database cluster dump
+-- PostgreSQL database dump
--
\restrict xxxxx
-SET default_transaction_read_only = off;
+-- Dumped by pg_dump version 19devel
+SET statement_timeout = 0;
+SET lock_timeout = 0;
+SET idle_in_transaction_session_timeout = 0;
+SET transaction_timeout = 0;
SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;
+SET check_function_bodies = false;
+SET xmloption = content;
+SET client_min_messages = warning;
+SET row_security = off;
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- PostgreSQL database cluster dump
+--
+
+
+
+--
+-- Name: DEFAULT_TRANSACTION_READ_ONLY; Type: DEFAULT_TRANSACTION_READ_ONLY; Schema: -; Owner: -
+--
+
+SET default_transaction_read_only = off;
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
--
-- Roles
--
+
+
+--
+-- Name: dumpRoles; Type: dumpRoles; Schema: -; Owner: -
+--
+
CREATE ROLE andrew;
ALTER ROLE andrew WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN NOREPLICATION NOBYPASSRLS;
-CREATE ROLE buildfarm;
-ALTER ROLE buildfarm WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;
+--
+-- Name: dumpRoles; Type: dumpRoles; Schema: -; Owner: -
+--
+CREATE ROLE buildfarm;
+ALTER ROLE buildfarm WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+--
+-- Databases
+--
-\unrestrict xxxxx
--
--- Databases
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
--
--
-- Database "template1" dump
--
+
+
+--
+-- Name: CONNECT; Type: CONNECT; Schema: -; Owner: -
+--
+
\connect template1
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "andrew" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "isolation_regression_brin" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "isolation_regression_delay_execution" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "isolation_regression_index" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "isolation_regression_pgrowlocks" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "isolation_regression_postgres_fdw" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "isolation_regression_tcn" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "postgres" dump
+--
+
+
+
+--
+-- Name: CONNECT; Type: CONNECT; Schema: -; Owner: -
+--
+
+\connect postgres
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_amcheck" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_auto_explain" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_bloom" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_bool_plperl" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_btree_gin" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_btree_gist" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_citext" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_cube" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_dblink" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_dict_int" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_dict_xsyn" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_dummy_index_am" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_dummy_seclabel" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_earthdistance" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_file_fdw" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_fuzzystrmatch" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_gin" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_hstore" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_hstore_plperl" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_intarray" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_isn" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_jsonb_plperl" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_lo" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_ltree" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_nbtree" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pageinspect" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_passwordcheck" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pg_buffercache" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pg_overexplain" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pg_prewarm" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pg_surgery" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pg_trgm" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pg_visibility" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pgcrypto" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pgstattuple" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_plperl" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_plpgsql" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_plsample" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_pltcl" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_postgres_fdw" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_seg" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_spgist_name_ops" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_spi" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_tablefunc" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_binaryheap" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_bitmapset" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_bloomfilter" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_copy_callbacks" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_ddl_deparse" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_dsa" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_dsm_registry" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_extensions" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_ginpostinglist" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_integerset" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_lfind" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_parser" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_pg_dump" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_predtest" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_radixtree" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_rbtree" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_regex" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_resowner" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_rls_hooks" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_shm_mq" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_test_tidstore" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_tsm_system_rows" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_tsm_system_time" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_typcache" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_unaccent" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- Database "regression_xml2" dump
+--
+
+
+
+--
+-- Name: COMMENT; Type: COMMENT; Schema: -; Owner: -
+--
+
+--
+-- PostgreSQL database cluster dump complete
+--
+
+
+
+--
+-- PostgreSQL database dump complete
+--
+
+\unrestrict xxxxx
+
--
-- PostgreSQL database dump
--
@@ -59,14 +924,73 @@
SET row_security = off;
--
--- PostgreSQL database dump complete
+-- Name: template1; Type: DATABASE; Schema: -; Owner: buildfarm
--
+CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
+
+
+ALTER DATABASE template1 OWNER TO buildfarm;
+
\unrestrict xxxxx
+\connect template1
+\restrict xxxxx
+
+SET statement_timeout = 0;
+SET lock_timeout = 0;
+SET idle_in_transaction_session_timeout = 0;
+SET transaction_timeout = 0;
+SET client_encoding = 'SQL_ASCII';
+SET standard_conforming_strings = on;
+SELECT pg_catalog.set_config('search_path', '', false);
+SET check_function_bodies = false;
+SET xmloption = content;
+SET client_min_messages = warning;
+SET row_security = off;
--
--- Database "andrew" dump
+-- Name: DATABASE template1; Type: COMMENT; Schema: -; Owner: buildfarm
+--
+
+COMMENT ON DATABASE template1 IS 'default template for new databases';
+
+
+--
+-- Name: template1; Type: DATABASE PROPERTIES; Schema: -; Owner: buildfarm
+--
+
+ALTER DATABASE template1 IS_TEMPLATE = true;
+
+
+\unrestrict xxxxx
+\connect template1
+\restrict xxxxx
+
+SET statement_timeout = 0;
+SET lock_timeout = 0;
+SET idle_in_transaction_session_timeout = 0;
+SET transaction_timeout = 0;
+SET client_encoding = 'SQL_ASCII';
+SET standard_conforming_strings = on;
+SELECT pg_catalog.set_config('search_path', '', false);
+SET check_function_bodies = false;
+SET xmloption = content;
+SET client_min_messages = warning;
+SET row_security = off;
+
--
+-- Name: DATABASE template1; Type: ACL; Schema: -; Owner: buildfarm
+--
+
+REVOKE CONNECT,TEMPORARY ON DATABASE template1 FROM PUBLIC;
+GRANT CONNECT ON DATABASE template1 TO PUBLIC;
+
+
+--
+-- PostgreSQL database dump complete
+--
+
+\unrestrict xxxxx
--
-- PostgreSQL database dump
@@ -121,10 +1045,6 @@
\unrestrict xxxxx
--
--- Database "isolation_regression_brin" dump
---
-
---
-- PostgreSQL database dump
--
@@ -219,10 +1139,6 @@
\unrestrict xxxxx
--
--- Database "isolation_regression_delay_execution" dump
---
-
---
-- PostgreSQL database dump
--
@@ -303,10 +1219,6 @@
\unrestrict xxxxx
--
--- Database "isolation_regression_index" dump
---
-
---
-- PostgreSQL database dump
--
@@ -387,10 +1299,6 @@
\unrestrict xxxxx
--
--- Database "isolation_regression_pgrowlocks" dump
---
-
---
-- PostgreSQL database dump
--
@@ -485,10 +1393,6 @@
\unrestrict xxxxx
--
--- Database "isolation_regression_postgres_fdw" dump
---
-
---
-- PostgreSQL database dump
--
@@ -583,10 +1487,6 @@
\unrestrict xxxxx
--
--- Database "isolation_regression_tcn" dump
---
-
---
-- PostgreSQL database dump
--
@@ -681,12 +1581,6 @@
\unrestrict xxxxx
--
--- Database "postgres" dump
---
-
-\connect postgres
-
---
-- PostgreSQL database dump
--
@@ -708,15 +1602,43 @@
SET row_security = off;
--
--- PostgreSQL database dump complete
+-- Name: postgres; Type: DATABASE; Schema: -; Owner: buildfarm
--
+CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
+
+
+ALTER DATABASE postgres OWNER TO buildfarm;
+
\unrestrict xxxxx
+\connect postgres
+\restrict xxxxx
+
+SET statement_timeout = 0;
+SET lock_timeout = 0;
+SET idle_in_transaction_session_timeout = 0;
+SET transaction_timeout = 0;
+SET client_encoding = 'SQL_ASCII';
+SET standard_conforming_strings = on;
+SELECT pg_catalog.set_config('search_path', '', false);
+SET check_function_bodies = false;
+SET xmloption = content;
+SET client_min_messages = warning;
+SET row_security = off;
--
--- Database "regression" dump
+-- Name: DATABASE postgres; Type: COMMENT; Schema: -; Owner: buildfarm
--
+COMMENT ON DATABASE postgres IS 'default administrative connection database';
+
+
+--
+-- PostgreSQL database dump complete
+--
+
+\unrestrict xxxxx
+
--
-- PostgreSQL database dump
--
@@ -25397,10 +26319,6 @@
\unrestrict xxxxx
--
--- Database "regression_amcheck" dump
---
-
---
-- PostgreSQL database dump
--
@@ -25548,10 +26466,6 @@
\unrestrict xxxxx
--
--- Database "regression_auto_explain" dump
---
-
---
-- PostgreSQL database dump
--
@@ -25632,10 +26546,6 @@
\unrestrict xxxxx
--
--- Database "regression_bloom" dump
---
-
---
-- PostgreSQL database dump
--
@@ -25772,10 +26682,6 @@
\unrestrict xxxxx
--
--- Database "regression_bool_plperl" dump
---
-
---
-- PostgreSQL database dump
--
@@ -25856,10 +26762,6 @@
\unrestrict xxxxx
--
--- Database "regression_btree_gin" dump
---
-
---
-- PostgreSQL database dump
--
@@ -26529,10 +27431,6 @@
\unrestrict xxxxx
--
--- Database "regression_btree_gist" dump
---
-
---
-- PostgreSQL database dump
--
@@ -27217,10 +28115,6 @@
\unrestrict xxxxx
--
--- Database "regression_citext" dump
---
-
---
-- PostgreSQL database dump
--
@@ -27447,10 +28341,6 @@
\unrestrict xxxxx
--
--- Database "regression_cube" dump
---
-
---
-- PostgreSQL database dump
--
@@ -27567,10 +28457,6 @@
\unrestrict xxxxx
--
--- Database "regression_dblink" dump
---
-
---
-- PostgreSQL database dump
--
@@ -27779,10 +28665,6 @@
\unrestrict xxxxx
--
--- Database "regression_dict_int" dump
---
-
---
-- PostgreSQL database dump
--
@@ -27877,10 +28759,6 @@
\unrestrict xxxxx
--
--- Database "regression_dict_xsyn" dump
---
-
---
-- PostgreSQL database dump
--
@@ -27975,10 +28853,6 @@
\unrestrict xxxxx
--
--- Database "regression_dummy_index_am" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28088,10 +28962,6 @@
\unrestrict xxxxx
--
--- Database "regression_dummy_seclabel" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28249,10 +29119,6 @@
\unrestrict xxxxx
--
--- Database "regression_earthdistance" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28348,10 +29214,6 @@
\unrestrict xxxxx
--
--- Database "regression_file_fdw" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28454,10 +29316,6 @@
\unrestrict xxxxx
--
--- Database "regression_fuzzystrmatch" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28552,10 +29410,6 @@
\unrestrict xxxxx
--
--- Database "regression_gin" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28765,10 +29619,6 @@
\unrestrict xxxxx
--
--- Database "regression_hstore" dump
---
-
---
-- PostgreSQL database dump
--
@@ -28935,10 +29785,6 @@
\unrestrict xxxxx
--
--- Database "regression_hstore_plperl" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29019,10 +29865,6 @@
\unrestrict xxxxx
--
--- Database "regression_intarray" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29151,10 +29993,6 @@
\unrestrict xxxxx
--
--- Database "regression_isn" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29235,10 +30073,6 @@
\unrestrict xxxxx
--
--- Database "regression_jsonb_plperl" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29319,10 +30153,6 @@
\unrestrict xxxxx
--
--- Database "regression_lo" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29417,10 +30247,6 @@
\unrestrict xxxxx
--
--- Database "regression_ltree" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29555,10 +30381,6 @@
\unrestrict xxxxx
--
--- Database "regression_nbtree" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29766,10 +30588,6 @@
\unrestrict xxxxx
--
--- Database "regression_pageinspect" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29850,10 +30668,6 @@
\unrestrict xxxxx
--
--- Database "regression_passwordcheck" dump
---
-
---
-- PostgreSQL database dump
--
@@ -29934,10 +30748,6 @@
\unrestrict xxxxx
--
--- Database "regression_pg_buffercache" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30032,10 +30842,6 @@
\unrestrict xxxxx
--
--- Database "regression_pg_overexplain" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30264,10 +31070,6 @@
\unrestrict xxxxx
--
--- Database "regression_pg_prewarm" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30348,10 +31150,6 @@
\unrestrict xxxxx
--
--- Database "regression_pg_surgery" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30432,10 +31230,6 @@
\unrestrict xxxxx
--
--- Database "regression_pg_trgm" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30588,10 +31382,6 @@
\unrestrict xxxxx
--
--- Database "regression_pg_visibility" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30686,10 +31476,6 @@
\unrestrict xxxxx
--
--- Database "regression_pgcrypto" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30814,10 +31600,6 @@
\unrestrict xxxxx
--
--- Database "regression_pgstattuple" dump
---
-
---
-- PostgreSQL database dump
--
@@ -30950,10 +31732,6 @@
\unrestrict xxxxx
--
--- Database "regression_plperl" dump
---
-
---
-- PostgreSQL database dump
--
@@ -32612,10 +33390,6 @@
\unrestrict xxxxx
--
--- Database "regression_plpgsql" dump
---
-
---
-- PostgreSQL database dump
--
@@ -34868,10 +35642,6 @@
\unrestrict xxxxx
--
--- Database "regression_plsample" dump
---
-
---
-- PostgreSQL database dump
--
@@ -35041,10 +35811,6 @@
\unrestrict xxxxx
--
--- Database "regression_pltcl" dump
---
-
---
-- PostgreSQL database dump
--
@@ -36772,10 +37538,6 @@
\unrestrict xxxxx
--
--- Database "regression_postgres_fdw" dump
---
-
---
-- PostgreSQL database dump
--
@@ -38758,10 +39520,6 @@
\unrestrict xxxxx
--
--- Database "regression_seg" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39059,10 +39817,6 @@
\unrestrict xxxxx
--
--- Database "regression_spgist_name_ops" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39181,10 +39935,6 @@
\unrestrict xxxxx
--
--- Database "regression_spi" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39331,10 +40081,6 @@
\unrestrict xxxxx
--
--- Database "regression_tablefunc" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39566,10 +40312,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_binaryheap" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39664,10 +40406,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_bitmapset" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39748,10 +40486,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_bloomfilter" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39846,10 +40580,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_copy_callbacks" dump
---
-
---
-- PostgreSQL database dump
--
@@ -39961,10 +40691,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_ddl_deparse" dump
---
-
---
-- PostgreSQL database dump
--
@@ -41435,10 +42161,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_dsa" dump
---
-
---
-- PostgreSQL database dump
--
@@ -41533,10 +42255,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_dsm_registry" dump
---
-
---
-- PostgreSQL database dump
--
@@ -41631,10 +42349,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_extensions" dump
---
-
---
-- PostgreSQL database dump
--
@@ -41908,10 +42622,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_ginpostinglist" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42006,10 +42716,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_integerset" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42104,10 +42810,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_lfind" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42202,10 +42904,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_parser" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42313,10 +43011,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_pg_dump" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42624,10 +43318,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_predtest" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42774,10 +43464,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_radixtree" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42872,10 +43558,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_rbtree" dump
---
-
---
-- PostgreSQL database dump
--
@@ -42970,10 +43652,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_regex" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43068,10 +43746,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_resowner" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43166,10 +43840,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_rls_hooks" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43250,10 +43920,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_shm_mq" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43348,10 +44014,6 @@
\unrestrict xxxxx
--
--- Database "regression_test_tidstore" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43446,10 +44108,6 @@
\unrestrict xxxxx
--
--- Database "regression_tsm_system_rows" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43571,10 +44229,6 @@
\unrestrict xxxxx
--
--- Database "regression_tsm_system_time" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43697,10 +44351,6 @@
\unrestrict xxxxx
--
--- Database "regression_typcache" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43797,10 +44447,6 @@
\unrestrict xxxxx
--
--- Database "regression_unaccent" dump
---
-
---
-- PostgreSQL database dump
--
@@ -43881,10 +44527,6 @@
\unrestrict xxxxx
--
--- Database "regression_xml2" dump
---
-
---
-- PostgreSQL database dump
--
@@ -44026,7 +44668,3 @@
\unrestrict xxxxx
---
--- PostgreSQL database cluster dump complete
---
-
On Tue, Jan 6, 2026 at 11:56 AM Mahendra Singh Thalor <mahi6run@gmail.com>
wrote:
We have another thread for this. We have patches also. Last year, we
planned to block these databases at creation time.

It's probably harmless, we connect to the databases further down to do
actual work. But it's also not nice. The toc.glo seems to have a bunch of
extraneous entries of type COMMENT and CONNECT. Why is that? As far as
possible this should have output pretty much identical to a plain pg_dumpall.

cheers

andrew

If we don't dump those comments in non-text format, then the output of
"pg_restore -f filename dump_non_text" will not be the same as the
plain dump of pg_dumpall.

Here, I am attaching an updated patch for review and testing.
Hi Mahendra,

I found a scenario in which a table is not restored if the
--transaction-size switch is used at the time of the pg_restore
operation.

Please refer to this scenario:

Case A -- pg_restore with "--transaction-size" against a dump taken
using pg_dump:

1. Create a table (create table t(n int);)
2. Perform pg_dump (./pg_dump -Ft postgres -f xyz.tar)
3. Create a database (create database test;)
4. Perform pg_restore using the "--transaction-size" switch
   (./pg_restore --transaction-size=1 -d test xyz.tar)

The table is restored into the test database.

Case B -- pg_restore with "--transaction-size" against a dump taken
using pg_dumpall:

1. Create a table (create table t(n int);)
2. Perform pg_dumpall (./pg_dumpall -Ft -f abc.tar)
3. Create a new cluster and start the server against a different port
4. Perform pg_restore using the "--transaction-size" switch
   (./pg_restore -Ft --transaction-size=10 -d postgres abc.tar -p 9000 -C)

The table is not restored.

If I remove the --transaction-size switch then this works.

regards,