pg_upgrade should truncate/remove its logs before running
I have seen this numerous times but had not dug into it, until now.
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
I think it should either truncate the logfiles, or error early if any of the
files exist. Or it could put all its output files into a newly-created
subdirectory. Or this message could be output to the per-db logfiles, and not
just the static ones:
| "pg_upgrade run on %s".
For the per-db logfiles with OIDs in their name, changing open() from "append"
mode to truncate mode doesn't work, since they're written to in parallel.
They have to be removed/truncated in advance.
This is one possible fix. You can test its effect by deliberately breaking one
of the calls to exec_prog(), like this.
- "\"%s/pg_restore\" %s %s --exit-on-error --verbose "
+ "\"%s/pg_restore\" %s %s --exit-on-error --verboose "
Attachments:
0001-pg_upgrade-fail-if-logfiles-exist.patch (text/x-diff; charset=us-ascii)
From 1b1a86df1febce5f55257c625b81e1499f6bfce3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 11 Dec 2021 19:19:50 -0600
Subject: [PATCH] pg_upgrade: fail if logfiles exist..
Rather than appending to them, which is confusing since old errors are still
present.
---
src/bin/pg_upgrade/check.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 1106912863..92060e5abe 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -18,6 +18,7 @@ static void check_new_cluster_is_empty(void);
static void check_databases_are_compatible(void);
static void check_locale_and_encoding(DbInfo *olddb, DbInfo *newdb);
static bool equivalent_locale(int category, const char *loca, const char *locb);
+static void remove_logfiles(void);
static void check_is_install_user(ClusterInfo *cluster);
static void check_proper_datallowconn(ClusterInfo *cluster);
static void check_for_prepared_transactions(ClusterInfo *cluster);
@@ -168,7 +169,10 @@ check_and_dump_old_cluster(bool live_check)
* the old server is running.
*/
if (!user_opts.check)
+ {
+ remove_logfiles();
generate_old_dump();
+ }
if (!live_check)
stop_postmaster(false);
@@ -635,6 +639,24 @@ create_script_for_old_cluster_deletion(char **deletion_script_file_name)
check_ok();
}
+/*
+ * Remove any per-db logfiles left around by a previous upgrade attempt.
+ * The statically-named logfiles aren't touched, since there are clear
+ * separators added by parseCommandLine().
+ */
+static void
+remove_logfiles(void)
+{
+ int dbnum;
+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+ {
+ char log_file_name[MAXPGPATH];
+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+ snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
+ if (unlink(log_file_name) < 0 && errno != ENOENT)
+ pg_log(PG_FATAL, "failed to remove pre-existing logfile: %s: %m", log_file_name);
+ }
+}
/*
* check_is_install_user()
--
2.17.0
On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:
I have seen this numerous times but had not dug into it, until now.
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
I think it should either truncate the logfiles, or error early if any of the
files exist. Or it could put all its output files into a newly-created
subdirectory. Or this message could be output to the per-db logfiles, and not
just the static ones:
| "pg_upgrade run on %s".For the per-db logfiels with OIDs in their name, changing open() from "append"
mode to truncate mode doesn't work, since they're written to in parallel.
They have to be removed/truncated in advance.
This is one possible fix. You can test its effect by deliberately breaking one
of the calls to exec_prog(), like this.
- "\"%s/pg_restore\" %s %s --exit-on-error --verbose "
+ "\"%s/pg_restore\" %s %s --exit-on-error --verboose "
Uh, the database server doesn't erase its logs on crash/failure, so why
should pg_upgrade do that?
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On Wed, Dec 15, 2021 at 04:09:16PM -0500, Bruce Momjian wrote:
On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:
I have seen this numerous times but had not dug into it, until now.
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
I think it should either truncate the logfiles, or error early if any of the
files exist. Or it could put all its output files into a newly-created
subdirectory. Or this message could be output to the per-db logfiles, and not
just the static ones:
| "pg_upgrade run on %s".For the per-db logfiels with OIDs in their name, changing open() from "append"
mode to truncate mode doesn't work, since they're written to in parallel.
They have to be removed/truncated in advance.
This is one possible fix. You can test its effect by deliberately breaking one
of the calls to exec_prog(), like this.
- "\"%s/pg_restore\" %s %s --exit-on-error --verbose "
+ "\"%s/pg_restore\" %s %s --exit-on-error --verboose "
Uh, the database server doesn't erase its logs on crash/failure, so why
should pg_upgrade do that?
To avoid the presence of irrelevant errors from the previous invocation of
pg_upgrade.
Maybe you would prefer one of my other ideas, like "put all its output files
into a newly-created subdirectory" ?
--
Justin
Bruce Momjian <bruce@momjian.us> writes:
On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
Uh, the database server doesn't erase its logs on crash/failure, so why
should pg_upgrade do that?
The server emits enough information so that it's not confusing:
there are timestamps, and there's an identifiable startup line.
pg_upgrade does neither. If you don't want to truncate as
Justin suggests, you should do that instead.
Personally I like the idea of making a timestamped subdirectory
and dropping all the files in that, because the thing that most
annoys *me* about pg_upgrade is the litter it leaves behind in
$CWD. A subdirectory would make it far easier to mop up the mess.
regards, tom lane
On Wed, Dec 15, 2021 at 04:17:23PM -0500, Tom Lane wrote:
Bruce Momjian <bruce@momjian.us> writes:
On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
Uh, the database server doesn't erase its logs on crash/failure, so why
should pg_upgrade do that?
The server emits enough information so that it's not confusing:
there are timestamps, and there's an identifiable startup line.
pg_upgrade does neither. If you don't want to truncate as
Justin suggests, you should do that instead.
Personally I like the idea of making a timestamped subdirectory
and dropping all the files in that, because the thing that most
annoys *me* about pg_upgrade is the litter it leaves behind in
$CWD. A subdirectory would make it far easier to mop up the mess.
Yes, lots of litter. Putting it in a subdirectory makes a lot of sense.
Justin, do you want to work on that patch, since you had an earlier
version to fix this?
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On 12/15/21 16:23, Bruce Momjian wrote:
On Wed, Dec 15, 2021 at 04:17:23PM -0500, Tom Lane wrote:
Bruce Momjian <bruce@momjian.us> writes:
On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
Uh, the database server doesn't erase its logs on crash/failure, so why
should pg_upgrade do that?
The server emits enough information so that it's not confusing:
there are timestamps, and there's an identifiable startup line.
pg_upgrade does neither. If you don't want to truncate as
Justin suggests, you should do that instead.
Personally I like the idea of making a timestamped subdirectory
and dropping all the files in that, because the thing that most
annoys *me* about pg_upgrade is the litter it leaves behind in
$CWD. A subdirectory would make it far easier to mop up the mess.
Yes, lots of litter. Putting it in a subdirectory makes a lot of sense.
Justin, do you want to work on that patch, since you had an earlier
version to fix this?
The directory name needs to be predictable somehow, or maybe optionally
set as a parameter. Having just a timestamped directory name would make
life annoying for a poor buildfarm maintainer. Also, please don't change
anything before I have a chance to adjust the buildfarm code to what is
going to be done.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, Dec 15, 2021 at 05:04:54PM -0500, Andrew Dunstan wrote:
On 12/15/21 16:23, Bruce Momjian wrote:
On Wed, Dec 15, 2021 at 04:17:23PM -0500, Tom Lane wrote:
Bruce Momjian <bruce@momjian.us> writes:
On Sat, Dec 11, 2021 at 08:50:17PM -0600, Justin Pryzby wrote:
If pg_upgrade fails and is re-run, it appends to its logfiles, which is
confusing since, if it fails again, it then looks like the original error
recurred and wasn't fixed. The "append" behavior dates back to 717f6d608.
Uh, the database server doesn't erase its logs on crash/failure, so why
should pg_upgrade do that?
The server emits enough information so that it's not confusing:
there are timestamps, and there's an identifiable startup line.
pg_upgrade does neither. If you don't want to truncate as
Justin suggests, you should do that instead.
Personally I like the idea of making a timestamped subdirectory
and dropping all the files in that, because the thing that most
annoys *me* about pg_upgrade is the litter it leaves behind in
$CWD. A subdirectory would make it far easier to mop up the mess.
Yes, lots of litter. Putting it in a subdirectory makes a lot of sense.
Justin, do you want to work on that patch, since you had an earlier
version to fix this?
The directory name needs to be predictable somehow, or maybe optionally
set as a parameter. Having just a timestamped directory name would make
life annoying for a poor buildfarm maintainer. Also, please don't change
anything before I have a chance to adjust the buildfarm code to what is
going to be done.
Feel free to suggest the desirable behavior.
It could write to pg_upgrade.log/* and refuse to run if the dir already exists.
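Something like this minimal sketch would give that behavior (the directory
name here is only a placeholder; mkdir() failing with EEXIST is what makes a
re-run refuse to start):

/*
 * Sketch: create the output directory up front and refuse to run if it
 * already exists.
 */
if (mkdir("pg_upgrade.log", 0700) != 0)
{
	if (errno == EEXIST)
		pg_fatal("output directory \"%s\" already exists; remove it and re-run\n",
				 "pg_upgrade.log");
	else
		pg_fatal("could not create directory \"%s\": %m\n",
				 "pg_upgrade.log");
}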
--
Justin
On Wed, Dec 15, 2021 at 04:13:10PM -0600, Justin Pryzby wrote:
On Wed, Dec 15, 2021 at 05:04:54PM -0500, Andrew Dunstan wrote:
The directory name needs to be predictable somehow, or maybe optionally
set as a parameter. Having just a timestamped directory name would make
life annoying for a poor buildfarm maintainer. Also, please don't change
anything before I have a chance to adjust the buildfarm code to what is
going to be done.
Feel free to suggest the desirable behavior.
It could write to pg_upgrade.log/* and refuse to run if the dir already exists.
Andrew's point looks rather sensible to me. So, this stuff should
have a predictable name (pg_upgrade.log, pg_upgrade_log or upgrade_log
would be fine). But I would also add an option to be able to define a
custom log path. The latter would be useful for the regression tests
so that everything could get redirected to a path already filtered
out.
--
Michael
On 16.12.21 02:39, Michael Paquier wrote:
On Wed, Dec 15, 2021 at 04:13:10PM -0600, Justin Pryzby wrote:
On Wed, Dec 15, 2021 at 05:04:54PM -0500, Andrew Dunstan wrote:
The directory name needs to be predictable somehow, or maybe optionally
set as a parameter. Having just a timestamped directory name would make
life annoying for a poor buildfarm maintainer. Also, please don't change
anything before I have a chance to adjust the buildfarm code to what is
going to be done.
Feel free to suggest the desirable behavior.
It could write to pg_upgrade.log/* and refuse to run if the dir already exists.
Andrew's point looks rather sensible to me. So, this stuff should
have a predictable name (pg_upgrade.log, pg_upgrade_log or upgrade_log
would be fine). But I would also add an option to be able to define a
custom log path. The latter would be useful for the regression tests
so that everything could get redirected to a path already filtered
out.
Could we make it write just one log file? Is having multiple log files
better?
On 16 Dec 2021, at 12:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:
Could we make it write just one log file? Is having multiple log files better?
Having individual <checkname>.txt files from checks with additional information
on how to handle the error are quite convenient when writing wrappers around
pg_upgrade (speaking from experience of having written multiple pg_upgrade
frontends). Parsing a single logfile is more work, and will break existing
scripts.
I'm in favor of a predictable by default logpath, with a parameter to override,
as mentioned upthread.
--
Daniel Gustafsson https://vmware.com/
On Thu, Dec 16, 2021 at 12:23:08PM +0100, Daniel Gustafsson wrote:
On 16 Dec 2021, at 12:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:
Could we make it write just one log file? Is having multiple log files better?
Having individual <checkname>.txt files from checks with additional information
on how to handle the error are quite convenient when writing wrappers around
pg_upgrade (speaking from experience of having written multiple pg_upgrade
frontends). Parsing a single logfile is more work, and will break existing
scripts.
I'm in favor of a predictable by default logpath, with a parameter to override,
as mentioned upthread.
I put this together in the simplest way, prefixing all the filenames with the
configured path..
Another option is to chdir() into the given path. But pg_upgrade takes (and
requires) a bunch of other paths, like -d -D -b -B, and those are traditionally
interpreted relative to CWD. I could getcwd() and prefix all the -[dDbB] with
that, but prefixing a handful of binary/data paths is hardly better than
prefixing a handful of dump/logfile paths. I suppose that openat() isn't
portable. I don't think it's worth prohibiting relative paths, so I can't
think of any less-naive way to do this.
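So the patch just builds every such path with a snprintf() of the base
directory plus the old filename; something like this made-up helper captures
the whole idea (the patch itself open-codes the snprintf at each call site):

/*
 * Sketch of the prefixing approach: build "basedir/filename" into a
 * caller-supplied buffer instead of using the bare filename.
 */
static void
make_output_path(char *buf, size_t len, const char *filename)
{
	snprintf(buf, len, "%s/%s", log_opts.basedir, filename);
}

/* e.g. make_output_path(output_path, sizeof(output_path), "loadable_libraries.txt"); */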
I didn't move the delete-old-cluster.sh, since that's intended to stay around
even after a successful upgrade, as opposed to the other logs, which are
typically removed at that point.
--
Justin
Attachments:
0001-pg_upgrade-write-logfiles-and-dumps-in-subdir.patch (text/x-diff; charset=us-ascii)
From 7b608a415645682a9853bb4fbfde54caa6ccc137 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 11 Dec 2021 19:19:50 -0600
Subject: [PATCH] pg_upgrade: write logfiles and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending the logfiles which have pre-existing errors in them.
And allows cleaning up after pg_upgrade more easily.
---
src/bin/pg_upgrade/check.c | 10 ++---
src/bin/pg_upgrade/dump.c | 8 ++--
src/bin/pg_upgrade/exec.c | 5 ++-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 21 +++++++++--
src/bin/pg_upgrade/pg_upgrade.c | 65 +++++++++++++++++++++------------
src/bin/pg_upgrade/pg_upgrade.h | 1 +
src/bin/pg_upgrade/server.c | 4 +-
8 files changed, 77 insertions(+), 40 deletions(-)
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 1b8ef79242..9bc4e106dd 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -552,7 +552,7 @@ create_script_for_old_cluster_deletion(char **deletion_script_file_name)
prep_status("Creating script to delete old cluster");
- if ((script = fopen_priv(*deletion_script_file_name, "w")) == NULL)
+ if ((script = fopen_priv(*deletion_script_file_name, "w")) == NULL) // OK ?
pg_fatal("could not open file \"%s\": %s\n",
*deletion_script_file_name, strerror(errno));
@@ -784,7 +784,7 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -861,7 +861,7 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -960,7 +960,7 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1215,7 +1215,7 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 90060d0f8e..fe979e0df4 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,10 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f %s/%s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
- GLOBALS_DUMP_FILE);
+ log_opts.basedir, GLOBALS_DUMP_FILE);
check_ok();
prep_status("Creating dump of database schemas\n");
@@ -52,10 +52,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
- sql_file_name, escaped_connstr.data);
+ log_opts.basedir, sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
}
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index 19cc06e0c3..6b70c20cf7 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/%s", log_opts.basedir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index 627ec0ce28..9e16df3203 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index 2d92294d9d..2c293205ee 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -65,6 +65,7 @@ parseCommandLine(int argc, char *argv[])
FILE *fp;
char **filename;
time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -202,8 +203,18 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
+ log_opts.basedir = getenv("PG_UPGRADE_LOGDIR");
+ if (log_opts.basedir != NULL)
+ log_opts.basedir = strdup(log_opts.basedir);
+ else
+ log_opts.basedir = "pg_upgrade_log.d";
+
+ if (mkdir(log_opts.basedir, 00700))
+ pg_fatal("could not create log directory \"%s\": %m\n", log_opts.basedir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/%s", log_opts.basedir, INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
@@ -211,8 +222,10 @@ parseCommandLine(int argc, char *argv[])
/* label start of upgrade in logfiles */
for (filename = output_files; *filename != NULL; filename++)
{
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.basedir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
/* Start with newline because we might be appending to a file. */
fprintf(fp, "\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 3628bd74a7..9f1b17fc15 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -302,9 +302,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
- GLOBALS_DUMP_FILE);
+ log_opts.basedir, GLOBALS_DUMP_FILE);
check_ok();
}
@@ -348,10 +348,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -385,10 +386,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
}
@@ -681,32 +683,49 @@ set_frozenxids(bool minmxid_only)
static void
cleanup(void)
{
+ int dbnum;
+ char **filename;
+ char filename_path[MAXPGPATH];
+
fclose(log_opts.internal);
/* Remove dump and log files? */
- if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
+ if (log_opts.retain)
+ return;
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.basedir, *filename);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n",
+ filename_path);
+ }
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
+ /* remove dump files */
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.basedir, GLOBALS_DUMP_FILE);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+ if (old_cluster.dbarr.dbs)
+ {
+ char *start = filename_path + strlen(log_opts.basedir) + 1;
+ int remaining = sizeof(filename_path) - strlen(log_opts.basedir) - 1;
+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+ {
+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
+ snprintf(start, remaining, DB_DUMP_FILE_MASK, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
+ snprintf(start, remaining, DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
+ }
}
+
+ if (rmdir(log_opts.basedir))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", log_opts.basedir);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 235a770026..4383a56afd 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -269,6 +269,7 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ char *basedir; /* Dir to create logfiles */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index 442a345173..328c19ce59 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,8 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir, log_opts.basedir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
--
2.17.0
On Fri, Dec 17, 2021 at 11:21:13AM -0600, Justin Pryzby wrote:
I put this together in the simplest way, prefixing all the filenames with the
configured path..
Well, why not.
Another option is to chdir() into the given path. But pg_upgrade takes (and
requires) a bunch of other paths, like -d -D -b -B, and those are traditionally
interpreted relative to CWD. I could getcwd() and prefix all the -[dDbB] with
that, but prefixing a handful of binary/data paths is hardly better than
prefixing a handful of dump/logfile paths. I suppose that openat() isn't
portable. I don't think it's worth prohibiting relative paths, so I can't
think of any less-naive way to do this.
If we add a new file, .gitignore would find out about it quickly and
report a not-so-clean tree. I would tend to prefer your approach here.
Relative paths can be useful.
I didn't move the delete-old-cluster.sh, since that's intended to stay around
even after a successful upgrade, as opposed to the other logs, which are
typically removed at that point.
Makes sense to me.
+ log_opts.basedir = getenv("PG_UPGRADE_LOGDIR");
+ if (log_opts.basedir != NULL)
+ log_opts.basedir = strdup(log_opts.basedir);
+ else
+ log_opts.basedir = "pg_upgrade_log.d";
Why is this controlled with an environment variable? It seems to me
that an option switch would be much better, no? While tuning things,
we could choose something simpler for the default, like
"pg_upgrade_log". I don't have a good history in naming new things,
though :)
.gitignore should be updated, I guess? Besides, this patch has no
documentation.
--
Michael
On Mon, Dec 20, 2021 at 08:21:51PM +0900, Michael Paquier wrote:
On Fri, Dec 17, 2021 at 11:21:13AM -0600, Justin Pryzby wrote:
+ log_opts.basedir = "pg_upgrade_log.d";
we could choose something simpler for the default, like
"pg_upgrade_log". I don't have a good history in naming new things,
though :)
I specifically called it .d to make it obvious that it's a dir - nearly
everything that ends in "log" is a file, so people are likely to run "rm" and
"less" on it - including myself.
.gitignore should be updated, I guess?
Are you suggesting to remove these ?
-/pg_upgrade_internal.log
-/reindex_hash.sql
-/loadable_libraries.txt
Besides, this patch has no documentation.
TBH I'm not even sure if the dir needs to be configurable ?
Attachments:
0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patch (text/x-diff; charset=us-ascii)
From 447cc989018a95871209f5305308a384d6a5fb9e Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 11 Dec 2021 19:19:50 -0600
Subject: [PATCH] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
doc/src/sgml/ref/pgupgrade.sgml | 13 +++++++
src/bin/pg_upgrade/.gitignore | 3 --
src/bin/pg_upgrade/check.c | 10 ++---
src/bin/pg_upgrade/dump.c | 8 ++--
src/bin/pg_upgrade/exec.c | 5 ++-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 23 ++++++++++--
src/bin/pg_upgrade/pg_upgrade.c | 65 +++++++++++++++++++++------------
src/bin/pg_upgrade/pg_upgrade.h | 1 +
src/bin/pg_upgrade/server.c | 4 +-
10 files changed, 92 insertions(+), 43 deletions(-)
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee98..887297317f9 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -130,6 +130,19 @@ PostgreSQL documentation
cluster</para></listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-l</option></term>
+ <term><option>--logdir</option></term>
+ <listitem><para>Directory where SQL and log files will be written.
+ The directory will be created and must not exist before calling
+ <command>pg_upgrade</command>.
+ It will be removed after a successful upgrade unless
+ <option>--retain</option> is specified.
+ The default is <literal>pg_upgrade_log.d</literal> in the current
+ working directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-N</option></term>
<term><option>--no-sync</option></term>
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa502..7222f8a6b1f 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,6 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
-/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 1b8ef79242b..9bc4e106dd3 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -552,7 +552,7 @@ create_script_for_old_cluster_deletion(char **deletion_script_file_name)
prep_status("Creating script to delete old cluster");
- if ((script = fopen_priv(*deletion_script_file_name, "w")) == NULL)
+ if ((script = fopen_priv(*deletion_script_file_name, "w")) == NULL) // OK ?
pg_fatal("could not open file \"%s\": %s\n",
*deletion_script_file_name, strerror(errno));
@@ -784,7 +784,7 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -861,7 +861,7 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -960,7 +960,7 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1215,7 +1215,7 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s", log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 90060d0f8eb..fe979e0df48 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,10 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f %s/%s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
- GLOBALS_DUMP_FILE);
+ log_opts.basedir, GLOBALS_DUMP_FILE);
check_ok();
prep_status("Creating dump of database schemas\n");
@@ -52,10 +52,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
- sql_file_name, escaped_connstr.data);
+ log_opts.basedir, sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
}
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index 19cc06e0c36..6b70c20cf78 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/%s", log_opts.basedir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index 627ec0ce28b..9e16df32033 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index 66fe16964e6..e5bb3286e89 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -57,6 +57,7 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"logdir", required_argument, NULL, 'l'},
{NULL, 0, NULL, 0}
};
@@ -66,6 +67,7 @@ parseCommandLine(int argc, char *argv[])
FILE *fp;
char **filename;
time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -136,6 +138,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_LINK;
break;
+ case 'l':
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
case 'N':
user_opts.do_sync = false;
break;
@@ -208,8 +214,15 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
+ if (log_opts.basedir == NULL)
+ log_opts.basedir = "pg_upgrade_log.d";
+
+ if (mkdir(log_opts.basedir, 00700))
+ pg_fatal("could not create log directory \"%s\": %m\n", log_opts.basedir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/%s", log_opts.basedir, INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
@@ -217,8 +230,10 @@ parseCommandLine(int argc, char *argv[])
/* label start of upgrade in logfiles */
for (filename = output_files; *filename != NULL; filename++)
{
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.basedir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
/* Start with newline because we might be appending to a file. */
fprintf(fp, "\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index f85cb2e2620..2b1373a7969 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -305,9 +305,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
- GLOBALS_DUMP_FILE);
+ log_opts.basedir, GLOBALS_DUMP_FILE);
check_ok();
}
@@ -351,10 +351,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -388,10 +389,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
}
@@ -684,32 +686,49 @@ set_frozenxids(bool minmxid_only)
static void
cleanup(void)
{
+ int dbnum;
+ char **filename;
+ char filename_path[MAXPGPATH];
+
fclose(log_opts.internal);
/* Remove dump and log files? */
- if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
+ if (log_opts.retain)
+ return;
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.basedir, *filename);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n",
+ filename_path);
+ }
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
+ /* remove dump files */
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.basedir, GLOBALS_DUMP_FILE);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+ if (old_cluster.dbarr.dbs)
+ {
+ char *start = filename_path + strlen(log_opts.basedir) + 1;
+ int remaining = sizeof(filename_path) - strlen(log_opts.basedir) - 1;
+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+ {
+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
+ snprintf(start, remaining, DB_DUMP_FILE_MASK, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
+ snprintf(start, remaining, DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
+ }
}
+
+ if (rmdir(log_opts.basedir))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", log_opts.basedir);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 22169f10021..baa7613753e 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -269,6 +269,7 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ char *basedir; /* Dir to create logfiles */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index 442a345173c..328c19ce593 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,8 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir, log_opts.basedir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
--
2.17.0
On Mon, Dec 20, 2021 at 09:39:26PM -0600, Justin Pryzby wrote:
On Mon, Dec 20, 2021 at 08:21:51PM +0900, Michael Paquier wrote:
we could choose something simpler for the default, like
"pg_upgrade_log". I don't have a good history in naming new things,
though :)
I specifically called it .d to make it obvious that it's a dir - nearly
everything that ends in "log" is a file, so people are likely to run "rm" and
"less" on it - including myself.
Okay.
.gitignore should be updated, I guess?
Are you suggesting to remove these ?
-/pg_upgrade_internal.log
-/loadable_libraries.txt
Yep, it looks so as these are part of the logs, the second one being a
failure state.
-/reindex_hash.sql
But this one is not, no?
Besides, this patch has no documentation.
TBH I'm not even sure if the dir needs to be configurable ?
I'd think it is better to have some control on that. Not sure what
the opinion of others is on this specific point, though.
--
Michael
Michael Paquier <michael@paquier.xyz> writes:
On Mon, Dec 20, 2021 at 09:39:26PM -0600, Justin Pryzby wrote:
Are you suggesting to remove these ?
-/pg_upgrade_internal.log
-/loadable_libraries.txt
Yep, it looks so as these are part of the logs, the second one being a
failure state.
-/reindex_hash.sql
But this one is not, no?
I'd like to get to a state where there's just one thing to "rm -rf"
to clean up after any pg_upgrade run. If we continue to leave the
we-suggest-you-run-these scripts loose in $CWD then we've not really
improved things much.
Perhaps there'd be merit in putting log files into an additional
subdirectory of that output directory, like
pg_upgrade_output.d/logs/foo.log, so that the more-ignorable
output files would be separated from the less-ignorable ones.
Or perhaps that's just gilding the lily.
regards, tom lane
On Wed, Dec 22, 2021 at 09:52:26AM -0500, Tom Lane wrote:
Michael Paquier <michael@paquier.xyz> writes:
On Mon, Dec 20, 2021 at 09:39:26PM -0600, Justin Pryzby wrote:
Are you suggesting to remove these ?
-/pg_upgrade_internal.log
-/loadable_libraries.txt
Yep, it looks so as these are part of the logs, the second one being a
failure state.
-/reindex_hash.sql
But this one is not, no?
I'd like to get to a state where there's just one thing to "rm -rf"
to clean up after any pg_upgrade run. If we continue to leave the
we-suggest-you-run-these scripts loose in $CWD then we've not really
improved things much.
My patch moves reindex_hash.sql, and I'm having trouble seeing why it shouldn't
be handled in .gitignore the same way as other stuff that's moved.
But delete-old-cluster.sh is not moved, and I'm not sure how to improve on
that.
Perhaps there'd be merit in putting log files into an additional
subdirectory of that output directory, like
pg_upgrade_output.d/logs/foo.log, so that the more-ignorable
output files would be separated from the less-ignorable ones.
Or perhaps that's just gilding the lily.
In the case it's successful, everything is removed - except for the delete
script. I can see the case for separating the dumps (which are essentially
internal and of which there may be many) and the logs (same), from
the .txt error files like loadable_libraries.txt (which are user-facing).
It could also be divided with each DB having its own subdir, with a dumpfile
and a logfile.
Should the unix socket be created underneath the "output dir" ?
Should it be possible to set the output dir to "." ? That would give the
pre-existing behavior, but only if we don't use subdirs for log/ and dump/.
Attachments:
0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patch (text/x-diff; charset=us-ascii)
From 8d82297d0e46b48460820ca338781d352093443a Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 11 Dec 2021 19:19:50 -0600
Subject: [PATCH] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
doc/src/sgml/ref/pgupgrade.sgml | 15 ++++++-
src/bin/pg_upgrade/.gitignore | 3 --
src/bin/pg_upgrade/check.c | 14 +++---
src/bin/pg_upgrade/dump.c | 6 ++-
src/bin/pg_upgrade/exec.c | 5 ++-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 31 ++++++++++++--
src/bin/pg_upgrade/pg_upgrade.c | 76 +++++++++++++++++++++++----------
src/bin/pg_upgrade/pg_upgrade.h | 1 +
src/bin/pg_upgrade/server.c | 6 ++-
10 files changed, 119 insertions(+), 41 deletions(-)
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee98..e6d730a2574 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -130,6 +130,19 @@ PostgreSQL documentation
cluster</para></listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-l</option></term>
+ <term><option>--logdir</option></term>
+ <listitem><para>Directory where SQL and log files will be written.
+ The directory will be created and must not exist before calling
+ <command>pg_upgrade</command>.
+ It will be removed after a successful upgrade unless
+ <option>--retain</option> is specified.
+ The default is <literal>pg_upgrade_output.d</literal> in the current
+ working directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-N</option></term>
<term><option>--no-sync</option></term>
@@ -768,7 +781,7 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
+ as schema dumps, below the current working directory. For security, be sure
that that directory is not readable or writable by any other users.
</para>
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa502..7222f8a6b1f 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,6 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
-/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 1b8ef79242b..d422e580e07 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -552,7 +552,7 @@ create_script_for_old_cluster_deletion(char **deletion_script_file_name)
prep_status("Creating script to delete old cluster");
- if ((script = fopen_priv(*deletion_script_file_name, "w")) == NULL)
+ if ((script = fopen_priv(*deletion_script_file_name, "w")) == NULL) // OK ?
pg_fatal("could not open file \"%s\": %s\n",
*deletion_script_file_name, strerror(errno));
@@ -784,7 +784,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -861,7 +862,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -960,7 +962,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1215,7 +1218,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 90060d0f8eb..d50f71e5136 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f %s/dump/%s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.basedir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/dump/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.basedir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index 19cc06e0c36..aba639f0ee4 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/log/%s", log_opts.basedir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index 627ec0ce28b..9e16df32033 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index 66fe16964e6..7c86bfc0569 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -57,6 +57,7 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"logdir", required_argument, NULL, 'l'},
{NULL, 0, NULL, 0}
};
@@ -66,6 +67,7 @@ parseCommandLine(int argc, char *argv[])
FILE *fp;
char **filename;
time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -136,6 +138,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_LINK;
break;
+ case 'l':
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
case 'N':
user_opts.do_sync = false;
break;
@@ -208,8 +214,23 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
+ if (log_opts.basedir == NULL)
+ log_opts.basedir = "pg_upgrade_output.d";
+
+ if (mkdir(log_opts.basedir, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/dump", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s", log_opts.basedir, INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
@@ -217,8 +238,10 @@ parseCommandLine(int argc, char *argv[])
/* label start of upgrade in logfiles */
for (filename = output_files; *filename != NULL; filename++)
{
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
+ log_opts.basedir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
/* Start with newline because we might be appending to a file. */
fprintf(fp, "\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index f85cb2e2620..06e16f0b0b6 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -305,8 +305,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/dump/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.basedir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -351,10 +352,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/dump/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -388,10 +390,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/dump/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
}
@@ -684,32 +687,61 @@ set_frozenxids(bool minmxid_only)
static void
cleanup(void)
{
+ int dbnum;
+ char **filename;
+ char filename_path[MAXPGPATH];
+
fclose(log_opts.internal);
/* Remove dump and log files? */
- if (!log_opts.retain)
+ if (log_opts.retain)
+ return;
+
+ for (filename = output_files; *filename != NULL; filename++)
{
- int dbnum;
- char **filename;
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
+ log_opts.basedir, *filename);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n",
+ filename_path);
+ }
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
+ /* remove dump files */
+ snprintf(filename_path, sizeof(filename_path), "%s/dump/%s",
+ log_opts.basedir, GLOBALS_DUMP_FILE);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
+ if (old_cluster.dbarr.dbs)
+ {
+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+ {
+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/dump/" DB_DUMP_FILE_MASK,
+ log_opts.basedir, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/log/" DB_DUMP_LOG_FILE_MASK,
+ log_opts.basedir, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
+ }
+ }
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/dump", log_opts.basedir);
+ if (rmdir(filename_path))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", filename_path);
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/log", log_opts.basedir);
+ if (rmdir(filename_path))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", filename_path);
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ if (rmdir(log_opts.basedir))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", log_opts.basedir);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 22169f10021..baa7613753e 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -269,6 +269,7 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ char *basedir; /* Dir to create logfiles */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index 442a345173c..120437bf0ac 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/log/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.basedir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
--
2.17.0
The cfbot was failing under Windows:
| [22:07:02.159] could not create directory "pg_upgrade_output.d": File exists
That's because parseCommandLine() was called before get_restricted_token(), which
re-executes the process on Windows and therefore runs parseCommandLine() a second time.
parseCommandLine() already does things like opening logfiles, so that's where my
mkdir() is. The second run fails because the re-exec never goes through the cleanup()
path, so the directory created by the first run is still there.
I fixed it by calling get_restricted_token() before parseCommandLine().
There's precedent for that in pg_regress (though the three other callers do it
differently).
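In other words, the fix just moves the re-exec ahead of anything that touches the
filesystem; a minimal sketch of the resulting top of main() (the attached patch has
the authoritative hunk):

    /* Set default restrictive mask until new cluster permissions are read */
    umask(PG_MODE_MASK_OWNER);

    /* On Windows this re-executes pg_upgrade under a restricted token */
    get_restricted_token();

    /* Runs only in the final process now, so its mkdir() cannot collide */
    parseCommandLine(argc, argv);

    adjust_data_dir(&old_cluster);
    adjust_data_dir(&new_cluster);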
It would arguably be better to always call get_restricted_token() as early as
possible, but for now I only changed pg_upgrade. It would probably also be
better if parseCommandLine() only parsed the command line, but for now I just
added on to the logfile handling that's already there.
BTW, the CI integration is pretty swell. I added a few lines of debugging code
to figure out what was happening here; check-world on 4 OSes is faster than
check-world run locally. I rearranged cirrus.yaml to make Windows run its
upgrade check first, to save a few minutes.
Maybe the command-line argument should be called something other than "logdir",
since it also writes dumps there. Then again, the dumps are more or less not
user-facing, and -d and -o are already taken. Maybe it shouldn't be
configurable at all?
--
Justin
Attachments:
0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patch (text/x-diff; charset=us-ascii)
From 740ef9422ba6cc92d3d4ae2ff23ee51e8e9891e9 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 11 Dec 2021 19:19:50 -0600
Subject: [PATCH] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
doc/src/sgml/ref/pgupgrade.sgml | 15 ++++++-
src/bin/pg_upgrade/.gitignore | 3 --
src/bin/pg_upgrade/Makefile | 4 +-
src/bin/pg_upgrade/check.c | 12 +++--
src/bin/pg_upgrade/dump.c | 6 ++-
src/bin/pg_upgrade/exec.c | 5 ++-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 31 +++++++++++--
src/bin/pg_upgrade/pg_upgrade.c | 80 +++++++++++++++++++++++----------
src/bin/pg_upgrade/pg_upgrade.h | 1 +
src/bin/pg_upgrade/server.c | 6 ++-
11 files changed, 121 insertions(+), 45 deletions(-)
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee98..e6d730a2574 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -130,6 +130,19 @@ PostgreSQL documentation
cluster</para></listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-l</option></term>
+ <term><option>--logdir</option></term>
+ <listitem><para>Directory where SQL and log files will be written.
+ The directory will be created and must not exist before calling
+ <command>pg_upgrade</command>.
+ It will be removed after a successful upgrade unless
+ <option>--retain</option> is specified.
+ The default is <literal>pg_upgrade_output.d</literal> in the current
+ working directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-N</option></term>
<term><option>--no-sync</option></term>
@@ -768,7 +781,7 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
+ as schema dumps, below the current working directory. For security, be sure
that that directory is not readable or writable by any other users.
</para>
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa502..7222f8a6b1f 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,6 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
-/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 44d06be5a61..ab21cee3601 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -45,9 +45,7 @@ uninstall:
clean distclean maintainer-clean:
rm -f pg_upgrade$(X) $(OBJS)
rm -rf delete_old_cluster.sh log/ tmp_check/ \
- loadable_libraries.txt reindex_hash.sql \
- pg_upgrade_dump_globals.sql \
- pg_upgrade_dump_*.custom pg_upgrade_*.log
+ pg_upgrade_output.d/
# When $(MAKE) is present, make automatically infers that this is a
# recursive make. which is not actually what we want here, as that
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index c2f6c57782a..80229f37dec 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -784,7 +784,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -861,7 +862,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -960,7 +962,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1215,7 +1218,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 953873fa01d..516f25117a9 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f %s/dump/%s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.basedir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/dump/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.basedir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index c0cbb1bec79..5d93d65285d 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/log/%s", log_opts.basedir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index e73a3090708..acc97fafef0 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index c894fdd685e..aa4c4d61edf 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -57,6 +57,7 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"logdir", required_argument, NULL, 'l'},
{NULL, 0, NULL, 0}
};
@@ -66,6 +67,7 @@ parseCommandLine(int argc, char *argv[])
FILE *fp;
char **filename;
time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -136,6 +138,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_LINK;
break;
+ case 'l':
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
case 'N':
user_opts.do_sync = false;
break;
@@ -208,8 +214,23 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
+ if (log_opts.basedir == NULL)
+ log_opts.basedir = "pg_upgrade_output.d";
+
+ if (mkdir(log_opts.basedir, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/dump", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s", log_opts.basedir, INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
@@ -217,8 +238,10 @@ parseCommandLine(int argc, char *argv[])
/* label start of upgrade in logfiles */
for (filename = output_files; *filename != NULL; filename++)
{
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
+ log_opts.basedir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
/* Start with newline because we might be appending to a file. */
fprintf(fp, "\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 68b336e0a34..eb43ab211d7 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -84,10 +84,10 @@ main(int argc, char **argv)
/* Set default restrictive mask until new cluster permissions are read */
umask(PG_MODE_MASK_OWNER);
- parseCommandLine(argc, argv);
-
get_restricted_token();
+ parseCommandLine(argc, argv);
+
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
@@ -305,8 +305,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/dump/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.basedir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -351,10 +352,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/dump/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -388,10 +390,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/dump/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
}
@@ -684,32 +687,61 @@ set_frozenxids(bool minmxid_only)
static void
cleanup(void)
{
+ int dbnum;
+ char **filename;
+ char filename_path[MAXPGPATH];
+
fclose(log_opts.internal);
/* Remove dump and log files? */
- if (!log_opts.retain)
+ if (log_opts.retain)
+ return;
+
+ for (filename = output_files; *filename != NULL; filename++)
{
- int dbnum;
- char **filename;
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
+ log_opts.basedir, *filename);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n",
+ filename_path);
+ }
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
+ /* remove dump files */
+ snprintf(filename_path, sizeof(filename_path), "%s/dump/%s",
+ log_opts.basedir, GLOBALS_DUMP_FILE);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
+ if (old_cluster.dbarr.dbs)
+ {
+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+ {
+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/dump/" DB_DUMP_FILE_MASK,
+ log_opts.basedir, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/log/" DB_DUMP_LOG_FILE_MASK,
+ log_opts.basedir, old_db->db_oid);
+ if (unlink(filename_path))
+ pg_log(PG_WARNING, "failed to unlink: %s: %m\n", filename_path);
+ }
+ }
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/dump", log_opts.basedir);
+ if (rmdir(filename_path))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", filename_path);
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
+ snprintf(filename_path, sizeof(filename_path),
+ "%s/log", log_opts.basedir);
+ if (rmdir(filename_path))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", filename_path);
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ if (rmdir(log_opts.basedir))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", log_opts.basedir);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index dd4a5437316..b06f188fe79 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -269,6 +269,7 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ char *basedir; /* Dir to create logfiles */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index f5d4656dec8..e2581e0a897 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/log/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.basedir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
--
2.17.1
On Sat, Jan 08, 2022 at 12:48:57PM -0600, Justin Pryzby wrote:
I fixed it by calling get_restricted_token() before parseCommandLine().
There's precedent for that in pg_regress (though the three other callers do it
differently).

It would arguably be better to always call get_restricted_token() as early as
possible, but for now I only changed pg_upgrade. It would probably also be
better if parseCommandLine() only parsed the command line, but for now I just
added on to the logfile handling that's already there.
Well, the routine does a bit more than just parsing the options, as it
creates the directory infrastructure as well. As you say, I think
it would be better to have the option parsing and the
loading-into-structure portions in one routine, and the creation of
the paths in a second one. So, the new contents of the patch could
just be moved into a new routine, after getting the restricted token.
Moving get_restricted_token() before or after the option parsing as
you do is not a big deal, but your patch introduces more into the
existing routine than what is currently done there as of HEAD.
Maybe the command-line argument should be called something other than "logdir",
since it also writes dumps there. Then again, the dumps are more or less not
user-facing, and -d and -o are already taken. Maybe it shouldn't be
configurable at all?
If the choice of a short option becomes confusing, I'd be fine with
just a long option, but -l is fine IMO. Including the internal dumps
in the directory is fine to me, and using a subdir, as you do, makes
things more organized.
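For what it's worth, a long-only option just needs a getopt_long entry whose value
has no matching short-option letter; a minimal sketch (the v2 patch later in the
thread takes this route with --outputdir):

    /* in the long_options[] table: no short letter, value 2 is otherwise unused */
    {"outputdir", required_argument, NULL, 2},

    /* ... and in the option switch: */
    case 2:                     /* --outputdir */
        log_opts.basedir = pg_strdup(optarg);
        break;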
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f %s/dump/%s",
Some quotes seem to be missing here.
static void
cleanup(void)
{
+ int dbnum;
+ char **filename;
+ char filename_path[MAXPGPATH];
[...]
+ if (rmdir(filename_path))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n",
filename_path);
+
+ if (rmdir(log_opts.basedir))
+ pg_log(PG_WARNING, "failed to rmdir: %s: %m\n", log_opts.basedir);
Is it intentional to not use rmtree() here? If you put all the data
in the same directory, cleanup() gets simpler.
--
Michael
On Tue, Jan 11, 2022 at 04:41:58PM +0900, Michael Paquier wrote:
On Sat, Jan 08, 2022 at 12:48:57PM -0600, Justin Pryzby wrote:
I fixed it by calling get_restricted_token() before parseCommandLine().
There's precedent for that in pg_regress (though the three other callers do it
differently).

It would arguably be better to always call get_restricted_token() as early as
possible, but for now I only changed pg_upgrade. It would probably also be
better if parseCommandLine() only parsed the command line, but for now I just
added on to the logfile handling that's already there.

Well, the routine does a bit more than just parsing the options, as it
creates the directory infrastructure as well. As you say, I think
it would be better to have the option parsing and the
loading-into-structure portions in one routine, and the creation of
the paths in a second one. So, the new contents of the patch could
just be moved into a new routine, after getting the restricted token.
Moving get_restricted_token() before or after the option parsing as
you do is not a big deal, but your patch introduces more into the
existing routine than what is currently done there as of HEAD.
I added mkdir() before the other stuff that messes with logfiles, because it
needs to happen before that.
Are you suggesting to change the pre-existing behavior of when logfiles are
created, like 0002 ?
Maybe the command-line argument should be called something other than "logdir",
since it also writes dumps there. Then again, the dumps are more or less not
user-facing, and -d and -o are already taken. Maybe it shouldn't be
configurable at all?

If the choice of a short option becomes confusing, I'd be fine with
just a long option, but -l is fine IMO. Including the internal dumps
in the directory is fine to me, and using a subdir, as you do, makes
things more organized.- "--binary-upgrade %s -f %s", + "--binary-upgrade %s -f %s/dump/%s", Some quotes seem to be missing here.
Yes, good catch
Is it intentional to not use rmtree() here? If you put all the data
in the same directory, cleanup() gets simpler.
There's no reason not to. We created the dir, and the user didn't specify to
preserve it. It'd be their fault if they put something valuable there after
starting pg_upgrade.
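With everything kept under log_opts.basedir, cleanup() in the attached revision
collapses to roughly the following (the attached patch has the authoritative hunk):

    static void
    cleanup(void)
    {
        fclose(log_opts.internal);

        /* Remove dump and log files? */
        if (!log_opts.retain)
            rmtree(log_opts.basedir, true); /* true = also remove basedir itself */
    }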
--
Justin
Attachments:
0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patch (text/x-diff; charset=us-ascii)
From 9b228bb1b529d0998340263dcb4238beeccd2ac9 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 11 Dec 2021 19:19:50 -0600
Subject: [PATCH 1/2] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
doc/src/sgml/ref/pgupgrade.sgml | 15 ++++++++++++-
src/bin/pg_upgrade/.gitignore | 3 ---
src/bin/pg_upgrade/Makefile | 4 +---
src/bin/pg_upgrade/check.c | 12 +++++++----
src/bin/pg_upgrade/dump.c | 6 ++++--
src/bin/pg_upgrade/exec.c | 5 ++++-
src/bin/pg_upgrade/function.c | 3 ++-
src/bin/pg_upgrade/option.c | 31 +++++++++++++++++++++++----
src/bin/pg_upgrade/pg_upgrade.c | 38 ++++++++-------------------------
src/bin/pg_upgrade/pg_upgrade.h | 1 +
src/bin/pg_upgrade/server.c | 6 ++++--
11 files changed, 74 insertions(+), 50 deletions(-)
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee98..e6d730a2574 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -130,6 +130,19 @@ PostgreSQL documentation
cluster</para></listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>-l</option></term>
+ <term><option>--logdir</option></term>
+ <listitem><para>Directory where SQL and log files will be written.
+ The directory will be created and must not exist before calling
+ <command>pg_upgrade</command>.
+ It will be removed after a successful upgrade unless
+ <option>--retain</option> is specified.
+ The default is <literal>pg_upgrade_output.d</literal> in the current
+ working directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-N</option></term>
<term><option>--no-sync</option></term>
@@ -768,7 +781,7 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
+ as schema dumps, below the current working directory. For security, be sure
that that directory is not readable or writable by any other users.
</para>
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa502..7222f8a6b1f 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,6 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
-/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 44d06be5a61..ab21cee3601 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -45,9 +45,7 @@ uninstall:
clean distclean maintainer-clean:
rm -f pg_upgrade$(X) $(OBJS)
rm -rf delete_old_cluster.sh log/ tmp_check/ \
- loadable_libraries.txt reindex_hash.sql \
- pg_upgrade_dump_globals.sql \
- pg_upgrade_dump_*.custom pg_upgrade_*.log
+ pg_upgrade_output.d/
# When $(MAKE) is present, make automatically infers that this is a
# recursive make. which is not actually what we want here, as that
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index c2f6c57782a..80229f37dec 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -784,7 +784,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -861,7 +862,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -960,7 +962,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1215,7 +1218,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 953873fa01d..895e4bfe83d 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f \"%s/dump/%s\"",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.basedir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/dump/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.basedir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index c0cbb1bec79..5d93d65285d 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/log/%s", log_opts.basedir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index e73a3090708..acc97fafef0 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index c894fdd685e..aa4c4d61edf 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -57,6 +57,7 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"logdir", required_argument, NULL, 'l'},
{NULL, 0, NULL, 0}
};
@@ -66,6 +67,7 @@ parseCommandLine(int argc, char *argv[])
FILE *fp;
char **filename;
time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -136,6 +138,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_LINK;
break;
+ case 'l':
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
case 'N':
user_opts.do_sync = false;
break;
@@ -208,8 +214,23 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
+ if (log_opts.basedir == NULL)
+ log_opts.basedir = "pg_upgrade_output.d";
+
+ if (mkdir(log_opts.basedir, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/dump", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s", log_opts.basedir, INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
@@ -217,8 +238,10 @@ parseCommandLine(int argc, char *argv[])
/* label start of upgrade in logfiles */
for (filename = output_files; *filename != NULL; filename++)
{
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
+ log_opts.basedir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
/* Start with newline because we might be appending to a file. */
fprintf(fp, "\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 68b336e0a34..f90cd74be0e 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -84,10 +84,10 @@ main(int argc, char **argv)
/* Set default restrictive mask until new cluster permissions are read */
umask(PG_MODE_MASK_OWNER);
- parseCommandLine(argc, argv);
-
get_restricted_token();
+ parseCommandLine(argc, argv);
+
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
@@ -305,8 +305,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/dump/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.basedir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -351,10 +352,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/dump/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -388,10 +390,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/dump/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.basedir,
sql_file_name);
}
@@ -688,28 +691,5 @@ cleanup(void)
/* Remove dump and log files? */
if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
-
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
-
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
-
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
-
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
-
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ rmtree(log_opts.basedir, true);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index dd4a5437316..b06f188fe79 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -269,6 +269,7 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ char *basedir; /* Dir to create logfiles */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index f5d4656dec8..e2581e0a897 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/log/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.basedir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
--
2.17.1
0002-f-move-the-mkdir-and-logfile-stuff-to-a-new-function.patch (text/x-diff; charset=us-ascii)
From 37fe854259fe36b4dce0b1a824e5fa88fa064f9c Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Tue, 11 Jan 2022 13:06:32 -0600
Subject: [PATCH 2/2] f!move the mkdir and logfile stuff to a new function
instead of rearranging get_restricted_token()
---
src/bin/pg_upgrade/option.c | 37 -------------------------
src/bin/pg_upgrade/pg_upgrade.c | 48 ++++++++++++++++++++++++++++++++-
2 files changed, 47 insertions(+), 38 deletions(-)
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index aa4c4d61edf..767935d761f 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -9,7 +9,6 @@
#include "postgres_fe.h"
-#include <time.h>
#ifdef WIN32
#include <io.h>
#endif
@@ -64,10 +63,6 @@ parseCommandLine(int argc, char *argv[])
int option; /* Command line option */
int optindex = 0; /* used by getopt_long */
int os_user_effective_id;
- FILE *fp;
- char **filename;
- time_t run_time = time(NULL);
- char filename_path[MAXPGPATH];
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -217,41 +212,9 @@ parseCommandLine(int argc, char *argv[])
if (log_opts.basedir == NULL)
log_opts.basedir = "pg_upgrade_output.d";
- if (mkdir(log_opts.basedir, 00700))
- pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
-
- snprintf(filename_path, sizeof(filename_path), "%s/log", log_opts.basedir);
- if (mkdir(filename_path, 00700))
- pg_fatal("could not create directory \"%s\": %m\n", filename_path);
-
- snprintf(filename_path, sizeof(filename_path), "%s/dump", log_opts.basedir);
- if (mkdir(filename_path, 00700))
- pg_fatal("could not create directory \"%s\": %m\n", filename_path);
-
- snprintf(filename_path, sizeof(filename_path), "%s/log/%s", log_opts.basedir, INTERNAL_LOG_FILE);
- if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", filename_path);
-
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
- /* label start of upgrade in logfiles */
- for (filename = output_files; *filename != NULL; filename++)
- {
- snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
- log_opts.basedir, *filename);
- if ((fp = fopen_priv(filename_path, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
-
- /* Start with newline because we might be appending to a file. */
- fprintf(fp, "\n"
- "-----------------------------------------------------------------\n"
- " pg_upgrade run on %s"
- "-----------------------------------------------------------------\n\n",
- ctime(&run_time));
- fclose(fp);
- }
-
/* Turn off read-only mode; add prefix to PGOPTIONS? */
if (getenv("PGOPTIONS"))
{
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index f90cd74be0e..5ed491f5c23 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -37,6 +37,8 @@
#include "postgres_fe.h"
+#include <time.h>
+
#ifdef HAVE_LANGINFO_H
#include <langinfo.h>
#endif
@@ -53,6 +55,7 @@ static void prepare_new_globals(void);
static void create_new_objects(void);
static void copy_xact_xlog_xid(void);
static void set_frozenxids(bool minmxid_only);
+static void make_dirs(void);
static void setup(char *argv0, bool *live_check);
static void cleanup(void);
@@ -84,9 +87,11 @@ main(int argc, char **argv)
/* Set default restrictive mask until new cluster permissions are read */
umask(PG_MODE_MASK_OWNER);
+ parseCommandLine(argc, argv);
+
get_restricted_token();
- parseCommandLine(argc, argv);
+ make_dirs();
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
@@ -196,6 +201,47 @@ main(int argc, char **argv)
return 0;
}
+static void
+make_dirs(void)
+{
+ FILE *fp;
+ char **filename;
+ time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
+
+ if (mkdir(log_opts.basedir, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/dump", log_opts.basedir);
+ if (mkdir(filename_path, 00700))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s", log_opts.basedir, INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
+
+ /* label start of upgrade in logfiles */
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/log/%s",
+ log_opts.basedir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
+
+ /* Start with newline because we might be appending to a file. */
+ fprintf(fp, "\n"
+ "-----------------------------------------------------------------\n"
+ " pg_upgrade run on %s"
+ "-----------------------------------------------------------------\n\n",
+ ctime(&run_time));
+ fclose(fp);
+ }
+}
+
static void
setup(char *argv0, bool *live_check)
--
2.17.1
On Tue, Jan 11, 2022 at 02:03:07PM -0600, Justin Pryzby wrote:
I added mkdir() before the other stuff that messes with logfiles, because it
needs to happen before that.

Are you suggesting to change the pre-existing behavior of when logfiles are
created, like 0002 ?
Yes, something like that.
There's no reason not to. We created the dir, and the user didn't specify to
preserve it. It'd be their fault if they put something valuable there after
starting pg_upgrade.
This is a path for the data internal to pg_upgrade. My take is that
the code simplifications the new option brings are more valuable than
that concern, which I guess is unlikely to come up anyway. I may be
wrong, of course. By the way, while thinking about that, should we
worry about --logdir="."?
--
Michael
On Wed, Jan 12, 2022 at 12:59:54PM +0900, Michael Paquier wrote:
On Tue, Jan 11, 2022 at 02:03:07PM -0600, Justin Pryzby wrote:
There's no reason not to. We created the dir, and the user didn't specify to
preserve it. It'd be their fault if they put something valuable there after
starting pg_upgrade.

This is a path for the data internal to pg_upgrade. My take is that
the code simplifications the new option brings are more valuable than
that concern, which I guess is unlikely to come up anyway. I may be
wrong, of course. By the way, while thinking about that, should we
worry about --logdir="."?
I asked about that before. Right now, it'll exit(1) when mkdir fails.
I had written a patch to allow "." by skipping mkdir (or allowing it to fail if
errno == EEXIST), but it seems like an awfully bad idea to try to make that
work with rmtree().
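For illustration, the rejected variant amounted to something like this (hypothetical
sketch, not one of the posted patches):

    /* hypothetical: tolerate an existing directory so that --logdir="." works */
    if (mkdir(log_opts.basedir, 00700) != 0 && errno != EEXIST)
        pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);

    /*
     * The catch: cleanup() would later rmtree(log_opts.basedir, true) on a
     * directory pg_upgrade did not create -- with ".", that is the user's
     * current working directory.
     */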
--
Justin
On Tue, Jan 11, 2022 at 10:08:13PM -0600, Justin Pryzby wrote:
I asked about that before. Right now, it'll exit(1) when mkdir fails.
I had written a patch to allow "." by skipping mkdir (or allowing it to fail if
errno == EEXIST), but it seems like an awfully bad idea to try to make that
work with rmtree().
So, I have been poking at this patch, and found myself doing a couple
of modifications:
- Renaming of the option from --logdir to --outputdir, as this does
not cover only logs. That also matches better the default value
assigned in the previous patches, aka pg_upgrade_output.d.
- Converting the output directory to an absolute path when the various
directories are created, and using that for the whole run. pg_upgrade
is unlikely to chdir(), but I don't really see why we should not just
use an absolute path all the time, set from the start.
- Adding some sanity checks on the path used, namely that parent
references are not allowed and that the output path is not a parent of
the current working directory.
- Rather than hardcoding "log/" and "dump/" in various places, saving
the full set of paths into log_opts.
I have noticed a couple of incorrect things in the docs, and some
other small things. It is a bit late here, so I may have missed a
couple of points, but I'll look at this stuff once again in a couple
of days.
So, what do you think?
--
Michael
Attachments:
v2-0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patch (text/x-diff; charset=us-ascii)
From ff66604850b6455ae1b5134199e7bc5f7e61fed1 Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Wed, 19 Jan 2022 16:56:08 +0900
Subject: [PATCH v2] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
src/bin/pg_upgrade/.gitignore | 2 -
src/bin/pg_upgrade/Makefile | 4 +-
src/bin/pg_upgrade/check.c | 12 ++--
src/bin/pg_upgrade/dump.c | 6 +-
src/bin/pg_upgrade/exec.c | 5 +-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 29 +++------
src/bin/pg_upgrade/pg_upgrade.c | 105 ++++++++++++++++++++++++--------
src/bin/pg_upgrade/pg_upgrade.h | 8 ++-
src/bin/pg_upgrade/server.c | 6 +-
doc/src/sgml/ref/pgupgrade.sgml | 17 +++++-
11 files changed, 130 insertions(+), 67 deletions(-)
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa50..939e50db6c 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,7 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 44d06be5a6..e833a1c5f0 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -45,9 +45,7 @@ uninstall:
clean distclean maintainer-clean:
rm -f pg_upgrade$(X) $(OBJS)
rm -rf delete_old_cluster.sh log/ tmp_check/ \
- loadable_libraries.txt reindex_hash.sql \
- pg_upgrade_dump_globals.sql \
- pg_upgrade_dump_*.custom pg_upgrade_*.log
+ reindex_hash.sql pg_upgrade_output.d/
# When $(MAKE) is present, make automatically infers that this is a
# recursive make. which is not actually what we want here, as that
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 3d218c2ad2..019bcb6c7b 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -783,7 +783,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -860,7 +861,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -959,7 +961,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1214,7 +1217,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 953873fa01..b69b4f9569 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index c0cbb1bec7..fadeea12ca 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/%s", log_opts.logdir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index e73a309070..acc97fafef 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index c894fdd685..54dff9691e 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -9,7 +9,6 @@
#include "postgres_fe.h"
-#include <time.h>
#ifdef WIN32
#include <io.h>
#endif
@@ -57,16 +56,15 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"outputdir", required_argument, NULL, 2},
{NULL, 0, NULL, 0}
};
int option; /* Command line option */
int optindex = 0; /* used by getopt_long */
int os_user_effective_id;
- FILE *fp;
- char **filename;
- time_t run_time = time(NULL);
+ log_opts.basedir = DEFAULT_OUTPUTDIR;
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -198,6 +196,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_CLONE;
break;
+ case 2:
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
default:
fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
os_info.progname);
@@ -208,27 +210,9 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
-
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
- /* label start of upgrade in logfiles */
- for (filename = output_files; *filename != NULL; filename++)
- {
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
-
- /* Start with newline because we might be appending to a file. */
- fprintf(fp, "\n"
- "-----------------------------------------------------------------\n"
- " pg_upgrade run on %s"
- "-----------------------------------------------------------------\n\n",
- ctime(&run_time));
- fclose(fp);
- }
-
/* Turn off read-only mode; add prefix to PGOPTIONS? */
if (getenv("PGOPTIONS"))
{
@@ -303,6 +287,7 @@ usage(void)
printf(_(" -v, --verbose enable verbose internal logging\n"));
printf(_(" -V, --version display version information, then exit\n"));
printf(_(" --clone clone instead of copying files to new cluster\n"));
+ printf(_(" --outputdir=DIR directory to store internal files (logs, etc.)\n"));
printf(_(" -?, --help show this help, then exit\n"));
printf(_("\n"
"Before running pg_upgrade you must:\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 07defacd67..52858f9dba 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -38,6 +38,8 @@
#include "postgres_fe.h"
+#include <time.h>
+
#ifdef HAVE_LANGINFO_H
#include <langinfo.h>
#endif
@@ -54,6 +56,7 @@ static void prepare_new_globals(void);
static void create_new_objects(void);
static void copy_xact_xlog_xid(void);
static void set_frozenxids(bool minmxid_only);
+static void make_outputdirs(void);
static void setup(char *argv0, bool *live_check);
static void cleanup(void);
@@ -89,6 +92,8 @@ main(int argc, char **argv)
get_restricted_token();
+ make_outputdirs();
+
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
@@ -197,6 +202,72 @@ main(int argc, char **argv)
return 0;
}
+/*
+ * Create and assign proper permissions to the set of output directories
+ * used to store any data generated internally, filling in log_opts in
+ * the process.
+ */
+static void
+make_outputdirs(void)
+{
+ FILE *fp;
+ char **filename;
+ time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
+ char *outputpath;
+ char cwd[MAXPGPATH];
+
+ /*
+ * Check that the output directory is a sane location. It should not
+ * include any references to the parent, and if a relative path, this had
+ * better not be a direct parent.
+ */
+ outputpath = make_absolute_path(log_opts.basedir);
+ if (path_contains_parent_reference(outputpath))
+ pg_fatal("reference to parent directory not allowed\n");
+ if (!getcwd(cwd, MAXPGPATH))
+ pg_fatal("could not determine current directory\n");
+ if (path_is_prefix_of_path(outputpath, cwd))
+ pg_fatal("output directory cannot be below current working directory\n");
+
+ /* save the sanitized and absolute path, and just use it */
+ log_opts.basedir = outputpath;
+
+ log_opts.dumpdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.dumpdir, MAXPGPATH, "%s/dump", log_opts.basedir);
+ log_opts.logdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.logdir, MAXPGPATH, "%s/log", log_opts.basedir);
+
+ if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+ if (mkdir(log_opts.dumpdir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+ if (mkdir(log_opts.logdir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/%s", log_opts.logdir,
+ INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
+
+ /* label start of upgrade in logfiles */
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.logdir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
+
+ /* Start with newline because we might be appending to a file. */
+ fprintf(fp, "\n"
+ "-----------------------------------------------------------------\n"
+ " pg_upgrade run on %s"
+ "-----------------------------------------------------------------\n\n",
+ ctime(&run_time));
+ fclose(fp);
+ }
+}
+
static void
setup(char *argv0, bool *live_check)
@@ -306,8 +377,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -352,10 +424,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -389,10 +462,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
}
@@ -689,28 +763,5 @@ cleanup(void)
/* Remove dump and log files? */
if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
-
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
-
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
-
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
-
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
-
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ rmtree(log_opts.basedir, true);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index da6770d0f8..7410ab6c44 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -26,11 +26,13 @@
#define GLOBALS_DUMP_FILE "pg_upgrade_dump_globals.sql"
#define DB_DUMP_FILE_MASK "pg_upgrade_dump_%u.custom"
+/* Default path to include all the files generated internally */
+#define DEFAULT_OUTPUTDIR "pg_upgrade_output.d"
+
#define DB_DUMP_LOG_FILE_MASK "pg_upgrade_dump_%u.log"
#define SERVER_LOG_FILE "pg_upgrade_server.log"
#define UTILITY_LOG_FILE "pg_upgrade_utility.log"
#define INTERNAL_LOG_FILE "pg_upgrade_internal.log"
-
extern char *output_files[];
/*
@@ -263,6 +265,10 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ /* Set of internal directories for output files */
+ char *basedir; /* Base output directory */
+ char *dumpdir; /* Dumps */
+ char *logdir; /* Log files */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index f5d4656dec..7878d233de 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.logdir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee9..abce224a58 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -231,6 +231,18 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--outputdir=</option><replaceable>path</replaceable></option></term>
+ <listitem><para>Directory where any output file internal to
+ <command>pg_upgrade</command> is generated. This includes SQL and log
+ files. The directory is created when starting
+ <command>pg_upgrade</command>, and is be removed after a successful
+ upgrade unless <option>--retain</option> is specified. The default value
+ is <literal>pg_upgrade_output.d</literal> in the current working
+ directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
@@ -768,8 +780,9 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the current working directory. The location can be controlled with
+ <option>--outputdir</option>
</para>
<para>
--
2.34.1
On Wed, Jan 19, 2022 at 05:13:18PM +0900, Michael Paquier wrote:
I have noticed a couple of incorrect things in the docs, and some
other things. It is a bit late here, so I may have missed a couple of
things but I'll look at this stuff once again in a couple of days.
And the docs failed to build..
--
Michael
Attachments:
v3-0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patchtext/x-diff; charset=us-asciiDownload
From b5a22e5e641930a46c74622bbf42a00b0abdeb8b Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Wed, 19 Jan 2022 19:38:24 +0900
Subject: [PATCH v3] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
src/bin/pg_upgrade/.gitignore | 2 -
src/bin/pg_upgrade/Makefile | 4 +-
src/bin/pg_upgrade/check.c | 12 ++--
src/bin/pg_upgrade/dump.c | 6 +-
src/bin/pg_upgrade/exec.c | 5 +-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 29 +++------
src/bin/pg_upgrade/pg_upgrade.c | 105 ++++++++++++++++++++++++--------
src/bin/pg_upgrade/pg_upgrade.h | 8 ++-
src/bin/pg_upgrade/server.c | 6 +-
doc/src/sgml/ref/pgupgrade.sgml | 17 +++++-
11 files changed, 130 insertions(+), 67 deletions(-)
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa50..939e50db6c 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,7 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 44d06be5a6..e833a1c5f0 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -45,9 +45,7 @@ uninstall:
clean distclean maintainer-clean:
rm -f pg_upgrade$(X) $(OBJS)
rm -rf delete_old_cluster.sh log/ tmp_check/ \
- loadable_libraries.txt reindex_hash.sql \
- pg_upgrade_dump_globals.sql \
- pg_upgrade_dump_*.custom pg_upgrade_*.log
+ reindex_hash.sql pg_upgrade_output.d/
# When $(MAKE) is present, make automatically infers that this is a
# recursive make. which is not actually what we want here, as that
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 3d218c2ad2..019bcb6c7b 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -783,7 +783,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -860,7 +861,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -959,7 +961,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1214,7 +1217,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 953873fa01..b69b4f9569 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index c0cbb1bec7..fadeea12ca 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/%s", log_opts.logdir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index e73a309070..acc97fafef 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index c894fdd685..54dff9691e 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -9,7 +9,6 @@
#include "postgres_fe.h"
-#include <time.h>
#ifdef WIN32
#include <io.h>
#endif
@@ -57,16 +56,15 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"outputdir", required_argument, NULL, 2},
{NULL, 0, NULL, 0}
};
int option; /* Command line option */
int optindex = 0; /* used by getopt_long */
int os_user_effective_id;
- FILE *fp;
- char **filename;
- time_t run_time = time(NULL);
+ log_opts.basedir = DEFAULT_OUTPUTDIR;
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -198,6 +196,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_CLONE;
break;
+ case 2:
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
default:
fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
os_info.progname);
@@ -208,27 +210,9 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
-
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
- /* label start of upgrade in logfiles */
- for (filename = output_files; *filename != NULL; filename++)
- {
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
-
- /* Start with newline because we might be appending to a file. */
- fprintf(fp, "\n"
- "-----------------------------------------------------------------\n"
- " pg_upgrade run on %s"
- "-----------------------------------------------------------------\n\n",
- ctime(&run_time));
- fclose(fp);
- }
-
/* Turn off read-only mode; add prefix to PGOPTIONS? */
if (getenv("PGOPTIONS"))
{
@@ -303,6 +287,7 @@ usage(void)
printf(_(" -v, --verbose enable verbose internal logging\n"));
printf(_(" -V, --version display version information, then exit\n"));
printf(_(" --clone clone instead of copying files to new cluster\n"));
+ printf(_(" --outputdir=DIR directory to store internal files (logs, etc.)\n"));
printf(_(" -?, --help show this help, then exit\n"));
printf(_("\n"
"Before running pg_upgrade you must:\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 07defacd67..52858f9dba 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -38,6 +38,8 @@
#include "postgres_fe.h"
+#include <time.h>
+
#ifdef HAVE_LANGINFO_H
#include <langinfo.h>
#endif
@@ -54,6 +56,7 @@ static void prepare_new_globals(void);
static void create_new_objects(void);
static void copy_xact_xlog_xid(void);
static void set_frozenxids(bool minmxid_only);
+static void make_outputdirs(void);
static void setup(char *argv0, bool *live_check);
static void cleanup(void);
@@ -89,6 +92,8 @@ main(int argc, char **argv)
get_restricted_token();
+ make_outputdirs();
+
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
@@ -197,6 +202,72 @@ main(int argc, char **argv)
return 0;
}
+/*
+ * Create and assign proper permissions to the set of output directories
+ * used to store any data generated internally, filling in log_opts in
+ * the process.
+ */
+static void
+make_outputdirs(void)
+{
+ FILE *fp;
+ char **filename;
+ time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
+ char *outputpath;
+ char cwd[MAXPGPATH];
+
+ /*
+ * Check that the output directory is a sane location. It should not
+ * include any references to the parent, and if a relative path, this had
+ * better not be a direct parent.
+ */
+ outputpath = make_absolute_path(log_opts.basedir);
+ if (path_contains_parent_reference(outputpath))
+ pg_fatal("reference to parent directory not allowed\n");
+ if (!getcwd(cwd, MAXPGPATH))
+ pg_fatal("could not determine current directory\n");
+ if (path_is_prefix_of_path(outputpath, cwd))
+ pg_fatal("output directory cannot be below current working directory\n");
+
+ /* save the sanitized and absolute path, and just use it */
+ log_opts.basedir = outputpath;
+
+ log_opts.dumpdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.dumpdir, MAXPGPATH, "%s/dump", log_opts.basedir);
+ log_opts.logdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.logdir, MAXPGPATH, "%s/log", log_opts.basedir);
+
+ if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+ if (mkdir(log_opts.dumpdir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+ if (mkdir(log_opts.logdir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/%s", log_opts.logdir,
+ INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
+
+ /* label start of upgrade in logfiles */
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.logdir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
+
+ /* Start with newline because we might be appending to a file. */
+ fprintf(fp, "\n"
+ "-----------------------------------------------------------------\n"
+ " pg_upgrade run on %s"
+ "-----------------------------------------------------------------\n\n",
+ ctime(&run_time));
+ fclose(fp);
+ }
+}
+
static void
setup(char *argv0, bool *live_check)
@@ -306,8 +377,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -352,10 +424,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -389,10 +462,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
}
@@ -689,28 +763,5 @@ cleanup(void)
/* Remove dump and log files? */
if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
-
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
-
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
-
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
-
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
-
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ rmtree(log_opts.basedir, true);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index da6770d0f8..7410ab6c44 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -26,11 +26,13 @@
#define GLOBALS_DUMP_FILE "pg_upgrade_dump_globals.sql"
#define DB_DUMP_FILE_MASK "pg_upgrade_dump_%u.custom"
+/* Default path to include all the files generated internally */
+#define DEFAULT_OUTPUTDIR "pg_upgrade_output.d"
+
#define DB_DUMP_LOG_FILE_MASK "pg_upgrade_dump_%u.log"
#define SERVER_LOG_FILE "pg_upgrade_server.log"
#define UTILITY_LOG_FILE "pg_upgrade_utility.log"
#define INTERNAL_LOG_FILE "pg_upgrade_internal.log"
-
extern char *output_files[];
/*
@@ -263,6 +265,10 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ /* Set of internal directories for output files */
+ char *basedir; /* Base output directory */
+ char *dumpdir; /* Dumps */
+ char *logdir; /* Log files */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index f5d4656dec..7878d233de 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.logdir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee9..b0cfacbf9c 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -231,6 +231,18 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--outputdir=</option><replaceable>path</replaceable></term>
+ <listitem><para>Directory where any output file internal to
+ <command>pg_upgrade</command> is generated. This includes SQL and log
+ files. The directory is created when starting
+ <command>pg_upgrade</command>, and is be removed after a successful
+ upgrade unless <option>--retain</option> is specified. The default value
+ is <literal>pg_upgrade_output.d</literal> in the current working
+ directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
@@ -768,8 +780,9 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the current working directory. The location can be controlled with
+ <option>--outputdir</option>
</para>
<para>
--
2.34.1
On Wed, Jan 19, 2022 at 05:13:18PM +0900, Michael Paquier wrote:
On Tue, Jan 11, 2022 at 10:08:13PM -0600, Justin Pryzby wrote:
I asked about that before. Right now, it'll exit(1) when mkdir fails.
I had written a patch to allow "." by skipping mkdir (or allowing it to fail if
errno == EEXIST), but it seems like an awfully bad idea to try to make that
work with rmtree().
I still don't know if it even needs to be configurable.
- Add some sanity check about the path used, aka no parent reference
allowed and the output path should not be a direct parent of the
current working directory.
I'm not sure these restrictions are needed ?
+ outputpath = make_absolute_path(log_opts.basedir);
+ if (path_contains_parent_reference(outputpath))
+ pg_fatal("reference to parent directory not allowed\n");
Besides, you're passing the wrong path here.
I have noticed a couple of incorrect things in the docs, and some
other things. It is a bit late here, so I may have missed a couple of
things but I'll look at this stuff once again in a couple of days.
+ <command>pg_upgrade</command>, and is be removed after a successful
remove "be"
+ if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))
S_IRWXG | S_IRWXO are useless due to the umask, right ?
Maybe use PG_DIR_MODE_OWNER ?
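For what it's worth, that point is easy to demonstrate outside of pg_upgrade.
This is a standalone sketch, not patch code: mkdir()'s mode argument is
filtered by the process umask, so with a restrictive umask the group/other
bits never reach the filesystem no matter what is passed.
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int
main(void)
{
	/* With a umask of 077, any group/other bits requested below are masked out. */
	umask(077);

	if (mkdir("umask_demo_dir", S_IRWXU | S_IRWXG | S_IRWXO) != 0)
		perror("mkdir");

	/* The directory ends up with mode 0700, as if only S_IRWXU had been passed. */
	return 0;
}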
+ if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+ if (mkdir(log_opts.dumpdir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
+ if (mkdir(log_opts.logdir, S_IRWXU | S_IRWXG | S_IRWXO))
+ pg_fatal("could not create directory \"%s\": %m\n", filename_path);
You're printing the wrong var. filename_path is not initialized.
--
Justin
On Wed, Jan 19, 2022 at 06:05:40PM -0600, Justin Pryzby wrote:
I still don't know if it even needs to be configurable.
I want this to be configurable to ease the switch of the pg_upgrade tests to
TAP, moving the logs into a deterministic temporary location specific to
each run. This makes reporting much easier on failure, with
repeatable tests, and that's why I began poking at this patch first.
I'm not sure these restrictions are needed ?
This could lead to issues with rmtree() if we are not careful enough,
no? We've had our share of argument injections with pg_upgrade commands
in the past (fcd15f1).
+ outputpath = make_absolute_path(log_opts.basedir);
+ if (path_contains_parent_reference(outputpath))
+ pg_fatal("reference to parent directory not allowed\n");
Besides, you're passing the wrong path here.
What would you suggest? I was just looking at that again this
morning, and split the logic into two parts for the absolute and
relative path cases, rejecting all cases like these, which would be
weird anyway (a rough sketch of the checks follows the list):
../
../popo
.././
././
/direct/path/to/cwd/
/direct/path/../path/to/cwd/
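Roughly, that gives the following sketch (fragment only, relying on the usual
path helpers from src/port; the v4 patch attached below has the real code):
	char		path[MAXPGPATH];

	strlcpy(path, log_opts.basedir, MAXPGPATH);
	canonicalize_path(path);

	if (is_absolute_path(path))
	{
		char		cwd[MAXPGPATH];

		/* Reject parent references and paths that contain the cwd. */
		if (path_contains_parent_reference(path))
			pg_fatal("reference to parent directory not allowed\n");
		if (!getcwd(cwd, MAXPGPATH))
			pg_fatal("could not determine current directory\n");
		if (path_is_prefix_of_path(path, cwd))
			pg_fatal("output directory cannot be a parent of the working directory\n");
	}
	else
	{
		/* Relative paths must resolve strictly below the cwd. */
		if (!path_is_relative_and_below_cwd(path) ||
			strcmp(path, ".") == 0)
			pg_fatal("output directory must be strictly below the current directory\n");
	}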
+ <command>pg_upgrade</command>, and is be removed after a successful
remove "be"
Fixed.
+ if (mkdir(log_opts.basedir, S_IRWXU | S_IRWXG | S_IRWXO))
S_IRWXG | S_IRWXO are useless due to the umask, right ?
Maybe use PG_DIR_MODE_OWNER ?
Hmm. We could just use pg_dir_create_mode, then. See pg_rewind, as
one example. This opens the door for something pluggable to
SetDataDirectoryCreatePerm(), though its original use case, data
folders, is a bit different.
You're printing the wrong var. filename_path is not initialized.
Ugh.
--
Michael
Attachments:
v4-0001-pg_upgrade-write-log-files-and-dumps-in-subdir.patchtext/x-diff; charset=us-asciiDownload
From 2efd3252e189578f865eb54d076596387494585c Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Thu, 20 Jan 2022 11:58:59 +0900
Subject: [PATCH v4] pg_upgrade: write log files and dumps in subdir..
The subdir must not exist when pg_upgrade is started.
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
src/bin/pg_upgrade/.gitignore | 2 -
src/bin/pg_upgrade/Makefile | 4 +-
src/bin/pg_upgrade/check.c | 12 ++--
src/bin/pg_upgrade/dump.c | 6 +-
src/bin/pg_upgrade/exec.c | 5 +-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 29 ++------
src/bin/pg_upgrade/pg_upgrade.c | 120 +++++++++++++++++++++++++-------
src/bin/pg_upgrade/pg_upgrade.h | 7 ++
src/bin/pg_upgrade/server.c | 6 +-
doc/src/sgml/ref/pgupgrade.sgml | 17 ++++-
11 files changed, 145 insertions(+), 66 deletions(-)
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa50..939e50db6c 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,7 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 44d06be5a6..e833a1c5f0 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -45,9 +45,7 @@ uninstall:
clean distclean maintainer-clean:
rm -f pg_upgrade$(X) $(OBJS)
rm -rf delete_old_cluster.sh log/ tmp_check/ \
- loadable_libraries.txt reindex_hash.sql \
- pg_upgrade_dump_globals.sql \
- pg_upgrade_dump_*.custom pg_upgrade_*.log
+ reindex_hash.sql pg_upgrade_output.d/
# When $(MAKE) is present, make automatically infers that this is a
# recursive make. which is not actually what we want here, as that
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 3d218c2ad2..019bcb6c7b 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -783,7 +783,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -860,7 +861,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -959,7 +961,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1214,7 +1217,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 953873fa01..b69b4f9569 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index c0cbb1bec7..fadeea12ca 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/%s", log_opts.logdir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index e73a309070..acc97fafef 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index c894fdd685..54dff9691e 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -9,7 +9,6 @@
#include "postgres_fe.h"
-#include <time.h>
#ifdef WIN32
#include <io.h>
#endif
@@ -57,16 +56,15 @@ parseCommandLine(int argc, char *argv[])
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
+ {"outputdir", required_argument, NULL, 2},
{NULL, 0, NULL, 0}
};
int option; /* Command line option */
int optindex = 0; /* used by getopt_long */
int os_user_effective_id;
- FILE *fp;
- char **filename;
- time_t run_time = time(NULL);
+ log_opts.basedir = DEFAULT_OUTPUTDIR;
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -198,6 +196,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.transfer_mode = TRANSFER_MODE_CLONE;
break;
+ case 2:
+ log_opts.basedir = pg_strdup(optarg);
+ break;
+
default:
fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
os_info.progname);
@@ -208,27 +210,9 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
-
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
- /* label start of upgrade in logfiles */
- for (filename = output_files; *filename != NULL; filename++)
- {
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
-
- /* Start with newline because we might be appending to a file. */
- fprintf(fp, "\n"
- "-----------------------------------------------------------------\n"
- " pg_upgrade run on %s"
- "-----------------------------------------------------------------\n\n",
- ctime(&run_time));
- fclose(fp);
- }
-
/* Turn off read-only mode; add prefix to PGOPTIONS? */
if (getenv("PGOPTIONS"))
{
@@ -303,6 +287,7 @@ usage(void)
printf(_(" -v, --verbose enable verbose internal logging\n"));
printf(_(" -V, --version display version information, then exit\n"));
printf(_(" --clone clone instead of copying files to new cluster\n"));
+ printf(_(" --outputdir=DIR directory to store internal files (logs, etc.)\n"));
printf(_(" -?, --help show this help, then exit\n"));
printf(_("\n"
"Before running pg_upgrade you must:\n"
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 07defacd67..84b05e6e37 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -38,6 +38,8 @@
#include "postgres_fe.h"
+#include <time.h>
+
#ifdef HAVE_LANGINFO_H
#include <langinfo.h>
#endif
@@ -54,6 +56,7 @@ static void prepare_new_globals(void);
static void create_new_objects(void);
static void copy_xact_xlog_xid(void);
static void set_frozenxids(bool minmxid_only);
+static void make_outputdirs(void);
static void setup(char *argv0, bool *live_check);
static void cleanup(void);
@@ -89,6 +92,8 @@ main(int argc, char **argv)
get_restricted_token();
+ make_outputdirs();
+
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
@@ -197,6 +202,87 @@ main(int argc, char **argv)
return 0;
}
+/*
+ * Create and assign proper permissions to the set of output directories
+ * used to store any data generated internally, filling in log_opts in
+ * the process.
+ */
+static void
+make_outputdirs(void)
+{
+ FILE *fp;
+ char **filename;
+ time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
+ char canonicalized_basedir[MAXPGPATH];
+
+ /*
+ * Check that the output directory is a sane location. It should not
+ * include any references to the parent, and if a relative path, this had
+ * better not be a direct parent.
+ */
+ strlcpy(canonicalized_basedir, log_opts.basedir, MAXPGPATH);
+ canonicalize_path(canonicalized_basedir);
+ if (is_absolute_path(canonicalized_basedir))
+ {
+ char cwd[MAXPGPATH];
+
+ if (path_contains_parent_reference(canonicalized_basedir))
+ pg_fatal("reference to parent directory not allowed for output directory\n");
+
+ if (!getcwd(cwd, MAXPGPATH))
+ pg_fatal("could not determine current directory\n");
+
+ if (path_is_prefix_of_path(canonicalized_basedir, cwd))
+ pg_fatal("output directory cannot be direct parent of working directory\n");
+ }
+ else
+ {
+ if (!path_is_relative_and_below_cwd(canonicalized_basedir))
+ pg_fatal("output directory must be in or below the current directory\n");
+
+ if (strcmp(canonicalized_basedir, ".") == 0)
+ pg_fatal("output directory cannot be the current working directory\n");
+ }
+
+ /* save a sanitized absolute path for the whole run */
+ log_opts.basedir = make_absolute_path(log_opts.basedir);
+
+ log_opts.dumpdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.dumpdir, MAXPGPATH, "%s/dump", log_opts.basedir);
+ log_opts.logdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.logdir, MAXPGPATH, "%s/log", log_opts.basedir);
+
+ if (mkdir(log_opts.basedir, pg_dir_create_mode))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
+ if (mkdir(log_opts.dumpdir, pg_dir_create_mode))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.dumpdir);
+ if (mkdir(log_opts.logdir, pg_dir_create_mode))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.logdir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/%s", log_opts.logdir,
+ INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
+
+ /* label start of upgrade in logfiles */
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.logdir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
+
+ /* Start with newline because we might be appending to a file. */
+ fprintf(fp, "\n"
+ "-----------------------------------------------------------------\n"
+ " pg_upgrade run on %s"
+ "-----------------------------------------------------------------\n\n",
+ ctime(&run_time));
+ fclose(fp);
+ }
+}
+
static void
setup(char *argv0, bool *live_check)
@@ -306,8 +392,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -352,10 +439,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -389,10 +477,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
}
@@ -689,28 +778,5 @@ cleanup(void)
/* Remove dump and log files? */
if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
-
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
-
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
-
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
-
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
-
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ rmtree(log_opts.basedir, true);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index da6770d0f8..28ca82e4cb 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -26,6 +26,9 @@
#define GLOBALS_DUMP_FILE "pg_upgrade_dump_globals.sql"
#define DB_DUMP_FILE_MASK "pg_upgrade_dump_%u.custom"
+/* Default path to include all the files generated internally */
+#define DEFAULT_OUTPUTDIR "pg_upgrade_output.d"
+
#define DB_DUMP_LOG_FILE_MASK "pg_upgrade_dump_%u.log"
#define SERVER_LOG_FILE "pg_upgrade_server.log"
#define UTILITY_LOG_FILE "pg_upgrade_utility.log"
@@ -263,6 +266,10 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ /* Set of internal directories for output files */
+ char *basedir; /* Base output directory */
+ char *dumpdir; /* Dumps */
+ char *logdir; /* Log files */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index f5d4656dec..7878d233de 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.logdir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee9..109382e782 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -231,6 +231,18 @@ PostgreSQL documentation
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><option>--outputdir=</option><replaceable>path</replaceable></term>
+ <listitem><para>Directory where any output file internal to
+ <command>pg_upgrade</command> is generated. This includes SQL and log
+ files. The directory is created when starting
+ <command>pg_upgrade</command>, and is removed after a successful
+ upgrade unless <option>--retain</option> is specified. The default value
+ is <literal>pg_upgrade_output.d</literal> in the current working
+ directory.
+ </para></listitem>
+ </varlistentry>
+
<varlistentry>
<term><option>-?</option></term>
<term><option>--help</option></term>
@@ -768,8 +780,9 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the current working directory. The location can be controlled with
+ <option>--outputdir</option>
</para>
<para>
--
2.34.1
On Thu, Jan 20, 2022 at 12:01:29PM +0900, Michael Paquier wrote:
On Wed, Jan 19, 2022 at 06:05:40PM -0600, Justin Pryzby wrote:
I'm not sure these restrictions are needed ?
This could lead to issues with rmtree() if we are not careful enough,
no? We'd had our deal of argument injections with pg_upgrade commands
in the past (fcd15f1).
We require that the dir not exist, by testing if (mkdir()).
So it's okay if someone specifies ../whatever or $CWD.
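In other words, something like this sketch, where the mkdir() itself is the
existence test (assuming pg_dir_create_mode from common/file_perm.h; the
error messages are only illustrative):
	if (mkdir(log_opts.basedir, pg_dir_create_mode) < 0)
	{
		/* A pre-existing directory is refused rather than reused. */
		if (errno == EEXIST)
			pg_fatal("output directory \"%s\" already exists\n",
					 log_opts.basedir);
		else
			pg_fatal("could not create directory \"%s\": %m\n",
					 log_opts.basedir);
	}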
--
Justin
On Wed, Jan 19, 2022 at 09:59:14PM -0600, Justin Pryzby wrote:
We require that the dir not exist, by testing if (mkdir()).
So it's okay if someone specifies ../whatever or $CWD.
What I am scared of here is the use of rmtree() if we allow something
like that. So we should either keep the removal code in its original
shape and allow such cases, or restrict the output path. In the end,
something has to change. My preference is for the latter because
I don't really see anybody doing the former; you favor the former.
Now, we are not talking about a lot of code for any of these, anyway.
Perhaps we'd better wait for more opinions.
--
Michael
On 19.01.22 09:13, Michael Paquier wrote:
- Renaming of the option from --logdir to --outputdir, as this does
not include only logs. That also matches better with the default value
assigned in previous patches, aka pg_upgrade_output.d.
I'm afraid that is too easily confused with the target directory.
Generally, a tool processes data from input to output or from source to
target or something like that, whereas a log is more clearly something
separate from this main processing stream. The desired "output" of
pg_upgrade is the upgraded cluster, after all.
A wildcard idea is to put the log output into the target cluster.
On Thu, Jan 20, 2022 at 10:31:15AM +0100, Peter Eisentraut wrote:
I'm afraid that is too easily confused with the target directory. Generally,
a tool processes data from input to output or from source to target or
something like that, whereas a log is more clearly something separate from
this main processing stream. The desired "output" of pg_upgrade is the
upgraded cluster, after all.
A wildcard idea is to put the log output into the target cluster.
Neat idea. That would work fine for my case. So I am fine to stick
with this suggestion.
--
Michael
On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:
Neat idea. That would work fine for my case. So I am fine to stick
with this suggestion.
I have been looking at this idea, and the result is quite nice, being
simpler than anything that has been proposed on this thread yet. We
get simpler removal logic, and there is no need to perform any
sanity checks on the provided output path, as long as we generate
the paths and the dirs after adjust_data_dir().
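The gist of it is just this (sketch only, using psprintf() for brevity; the
attached v5 patch is the actual change):
	/* Only valid once adjust_data_dir(&new_cluster) has run. */
	log_opts.basedir = psprintf("%s/%s", new_cluster.pgdata, BASE_OUTPUTDIR);
	log_opts.dumpdir = psprintf("%s/%s", new_cluster.pgdata, DUMP_OUTPUTDIR);
	log_opts.logdir = psprintf("%s/%s", new_cluster.pgdata, LOG_OUTPUTDIR);

	/* ... create the three directories with pg_dir_create_mode ... */

	/* On success, unless --retain was given, cleanup() then reduces to: */
	rmtree(log_opts.basedir, true);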
Thoughts?
--
Michael
Attachments:
v5-0001-pg_upgrade-write-log-files-and-dumps-in-subdir-fr.patchtext/plain; charset=us-asciiDownload
From 137bb3fb6632c4d5f0966f14410aca78236f143b Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Mon, 24 Jan 2022 10:58:48 +0900
Subject: [PATCH v5] pg_upgrade: write log files and dumps in subdir from new
cluster
This avoids appending to logfiles which have pre-existing errors in them.
And allows more easily cleaning up after pg_upgrade.
---
src/bin/pg_upgrade/.gitignore | 2 -
src/bin/pg_upgrade/Makefile | 4 +-
src/bin/pg_upgrade/check.c | 12 +++--
src/bin/pg_upgrade/dump.c | 6 ++-
src/bin/pg_upgrade/exec.c | 5 +-
src/bin/pg_upgrade/function.c | 3 +-
src/bin/pg_upgrade/option.c | 22 --------
src/bin/pg_upgrade/pg_upgrade.c | 93 +++++++++++++++++++++++----------
src/bin/pg_upgrade/pg_upgrade.h | 12 +++++
src/bin/pg_upgrade/server.c | 6 ++-
doc/src/sgml/ref/pgupgrade.sgml | 4 +-
11 files changed, 103 insertions(+), 66 deletions(-)
diff --git a/src/bin/pg_upgrade/.gitignore b/src/bin/pg_upgrade/.gitignore
index 2d3bfeaa50..939e50db6c 100644
--- a/src/bin/pg_upgrade/.gitignore
+++ b/src/bin/pg_upgrade/.gitignore
@@ -1,9 +1,7 @@
/pg_upgrade
# Generated by test suite
-/pg_upgrade_internal.log
/delete_old_cluster.sh
/delete_old_cluster.bat
/reindex_hash.sql
-/loadable_libraries.txt
/log/
/tmp_check/
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 44d06be5a6..49b94f0ac7 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -45,9 +45,7 @@ uninstall:
clean distclean maintainer-clean:
rm -f pg_upgrade$(X) $(OBJS)
rm -rf delete_old_cluster.sh log/ tmp_check/ \
- loadable_libraries.txt reindex_hash.sql \
- pg_upgrade_dump_globals.sql \
- pg_upgrade_dump_*.custom pg_upgrade_*.log
+ reindex_hash.sql
# When $(MAKE) is present, make automatically infers that this is a
# recursive make. which is not actually what we want here, as that
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 3d218c2ad2..019bcb6c7b 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -783,7 +783,8 @@ check_for_isn_and_int8_passing_mismatch(ClusterInfo *cluster)
return;
}
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"contrib_isn_and_int8_pass_by_value.txt");
for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
@@ -860,7 +861,8 @@ check_for_user_defined_postfix_ops(ClusterInfo *cluster)
prep_status("Checking for user-defined postfix operators");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"postfix_ops.txt");
/* Find any user defined postfix operators */
@@ -959,7 +961,8 @@ check_for_tables_with_oids(ClusterInfo *cluster)
prep_status("Checking for tables WITH OIDS");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"tables_with_oids.txt");
/* Find any tables declared WITH OIDS */
@@ -1214,7 +1217,8 @@ check_for_user_defined_encoding_conversions(ClusterInfo *cluster)
prep_status("Checking for user-defined encoding conversions");
- snprintf(output_path, sizeof(output_path),
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir,
"encoding_conversions.txt");
/* Find any user defined encoding conversions */
diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 953873fa01..b69b4f9569 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,9 +22,10 @@ generate_old_dump(void)
/* run new pg_dumpall binary for globals */
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
"\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
- "--binary-upgrade %s -f %s",
+ "--binary-upgrade %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
@@ -52,9 +53,10 @@ generate_old_dump(void)
parallel_exec_prog(log_file_name, NULL,
"\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
- "--binary-upgrade --format=custom %s --file=\"%s\" %s",
+ "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
new_cluster.bindir, cluster_conn_opts(&old_cluster),
log_opts.verbose ? "--verbose" : "",
+ log_opts.dumpdir,
sql_file_name, escaped_connstr.data);
termPQExpBuffer(&escaped_connstr);
diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c
index c0cbb1bec7..fadeea12ca 100644
--- a/src/bin/pg_upgrade/exec.c
+++ b/src/bin/pg_upgrade/exec.c
@@ -78,11 +78,12 @@ get_bin_version(ClusterInfo *cluster)
* The code requires it be called first from the primary thread on Windows.
*/
bool
-exec_prog(const char *log_file, const char *opt_log_file,
+exec_prog(const char *log_filename, const char *opt_log_file,
bool report_error, bool exit_on_error, const char *fmt,...)
{
int result = 0;
int written;
+ char log_file[MAXPGPATH];
#define MAXCMDLEN (2 * MAXPGPATH)
char cmd[MAXCMDLEN];
@@ -97,6 +98,8 @@ exec_prog(const char *log_file, const char *opt_log_file,
mainThreadId = GetCurrentThreadId();
#endif
+ snprintf(log_file, MAXPGPATH, "%s/%s", log_opts.logdir, log_filename);
+
written = 0;
va_start(ap, fmt);
written += vsnprintf(cmd + written, MAXCMDLEN - written, fmt, ap);
diff --git a/src/bin/pg_upgrade/function.c b/src/bin/pg_upgrade/function.c
index e73a309070..acc97fafef 100644
--- a/src/bin/pg_upgrade/function.c
+++ b/src/bin/pg_upgrade/function.c
@@ -128,7 +128,8 @@ check_loadable_libraries(void)
prep_status("Checking for presence of required libraries");
- snprintf(output_path, sizeof(output_path), "loadable_libraries.txt");
+ snprintf(output_path, sizeof(output_path), "%s/%s",
+ log_opts.basedir, "loadable_libraries.txt");
/*
* Now we want to sort the library names into order. This avoids multiple
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index c894fdd685..d2c82cc2bb 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -9,7 +9,6 @@
#include "postgres_fe.h"
-#include <time.h>
#ifdef WIN32
#include <io.h>
#endif
@@ -63,9 +62,6 @@ parseCommandLine(int argc, char *argv[])
int option; /* Command line option */
int optindex = 0; /* used by getopt_long */
int os_user_effective_id;
- FILE *fp;
- char **filename;
- time_t run_time = time(NULL);
user_opts.do_sync = true;
user_opts.transfer_mode = TRANSFER_MODE_COPY;
@@ -208,27 +204,9 @@ parseCommandLine(int argc, char *argv[])
if (optind < argc)
pg_fatal("too many command-line arguments (first is \"%s\")\n", argv[optind]);
- if ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, "a")) == NULL)
- pg_fatal("could not open log file \"%s\": %m\n", INTERNAL_LOG_FILE);
-
if (log_opts.verbose)
pg_log(PG_REPORT, "Running in verbose mode\n");
- /* label start of upgrade in logfiles */
- for (filename = output_files; *filename != NULL; filename++)
- {
- if ((fp = fopen_priv(*filename, "a")) == NULL)
- pg_fatal("could not write to log file \"%s\": %m\n", *filename);
-
- /* Start with newline because we might be appending to a file. */
- fprintf(fp, "\n"
- "-----------------------------------------------------------------\n"
- " pg_upgrade run on %s"
- "-----------------------------------------------------------------\n\n",
- ctime(&run_time));
- fclose(fp);
- }
-
/* Turn off read-only mode; add prefix to PGOPTIONS? */
if (getenv("PGOPTIONS"))
{
diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c
index 07defacd67..2582e10707 100644
--- a/src/bin/pg_upgrade/pg_upgrade.c
+++ b/src/bin/pg_upgrade/pg_upgrade.c
@@ -38,6 +38,8 @@
#include "postgres_fe.h"
+#include <time.h>
+
#ifdef HAVE_LANGINFO_H
#include <langinfo.h>
#endif
@@ -54,6 +56,7 @@ static void prepare_new_globals(void);
static void create_new_objects(void);
static void copy_xact_xlog_xid(void);
static void set_frozenxids(bool minmxid_only);
+static void make_outputdirs(char *pgdata);
static void setup(char *argv0, bool *live_check);
static void cleanup(void);
@@ -92,6 +95,12 @@ main(int argc, char **argv)
adjust_data_dir(&old_cluster);
adjust_data_dir(&new_cluster);
+ /*
+ * This needs to happen after adjusting the data directory of the
+ * new cluster in adjust_data_dir().
+ */
+ make_outputdirs(new_cluster.pgdata);
+
setup(argv[0], &live_check);
output_check_banner(live_check);
@@ -197,6 +206,56 @@ main(int argc, char **argv)
return 0;
}
+/*
+ * Create and assign proper permissions to the set of output directories
+ * used to store any data generated internally, filling in log_opts in
+ * the process.
+ */
+static void
+make_outputdirs(char *pgdata)
+{
+ FILE *fp;
+ char **filename;
+ time_t run_time = time(NULL);
+ char filename_path[MAXPGPATH];
+
+ log_opts.basedir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.basedir, MAXPGPATH, "%s/%s", pgdata, BASE_OUTPUTDIR);
+ log_opts.dumpdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.dumpdir, MAXPGPATH, "%s/%s", pgdata, LOG_OUTPUTDIR);
+ log_opts.logdir = (char *) pg_malloc(MAXPGPATH);
+ snprintf(log_opts.logdir, MAXPGPATH, "%s/%s", pgdata, DUMP_OUTPUTDIR);
+
+ if (mkdir(log_opts.basedir, pg_dir_create_mode))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.basedir);
+ if (mkdir(log_opts.dumpdir, pg_dir_create_mode))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.dumpdir);
+ if (mkdir(log_opts.logdir, pg_dir_create_mode))
+ pg_fatal("could not create directory \"%s\": %m\n", log_opts.logdir);
+
+ snprintf(filename_path, sizeof(filename_path), "%s/%s", log_opts.logdir,
+ INTERNAL_LOG_FILE);
+ if ((log_opts.internal = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not open log file \"%s\": %m\n", filename_path);
+
+ /* label start of upgrade in logfiles */
+ for (filename = output_files; *filename != NULL; filename++)
+ {
+ snprintf(filename_path, sizeof(filename_path), "%s/%s",
+ log_opts.logdir, *filename);
+ if ((fp = fopen_priv(filename_path, "a")) == NULL)
+ pg_fatal("could not write to log file \"%s\": %m\n", filename_path);
+
+ /* Start with newline because we might be appending to a file. */
+ fprintf(fp, "\n"
+ "-----------------------------------------------------------------\n"
+ " pg_upgrade run on %s"
+ "-----------------------------------------------------------------\n\n",
+ ctime(&run_time));
+ fclose(fp);
+ }
+}
+
static void
setup(char *argv0, bool *live_check)
@@ -306,8 +365,9 @@ prepare_new_globals(void)
prep_status("Restoring global objects in the new cluster");
exec_prog(UTILITY_LOG_FILE, NULL, true, true,
- "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s\"",
+ "\"%s/psql\" " EXEC_PSQL_ARGS " %s -f \"%s/%s\"",
new_cluster.bindir, cluster_conn_opts(&new_cluster),
+ log_opts.dumpdir,
GLOBALS_DUMP_FILE);
check_ok();
}
@@ -352,10 +412,11 @@ create_new_objects(void)
true,
true,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname postgres \"%s\"",
+ "--dbname postgres \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
break; /* done once we've processed template1 */
@@ -389,10 +450,11 @@ create_new_objects(void)
parallel_exec_prog(log_file_name,
NULL,
"\"%s/pg_restore\" %s %s --exit-on-error --verbose "
- "--dbname template1 \"%s\"",
+ "--dbname template1 \"%s/%s\"",
new_cluster.bindir,
cluster_conn_opts(&new_cluster),
create_opts,
+ log_opts.dumpdir,
sql_file_name);
}
@@ -689,28 +751,5 @@ cleanup(void)
/* Remove dump and log files? */
if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
-
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
-
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
-
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
-
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
-
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ rmtree(log_opts.basedir, true);
}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index da6770d0f8..f07f56eda4 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -26,6 +26,14 @@
#define GLOBALS_DUMP_FILE "pg_upgrade_dump_globals.sql"
#define DB_DUMP_FILE_MASK "pg_upgrade_dump_%u.custom"
+/*
+ * Base directories that include all the files generated internally,
+ * from the root path of the new cluster.
+ */
+#define BASE_OUTPUTDIR "pg_upgrade_output.d"
+#define LOG_OUTPUTDIR BASE_OUTPUTDIR "/log"
+#define DUMP_OUTPUTDIR BASE_OUTPUTDIR "/dump"
+
#define DB_DUMP_LOG_FILE_MASK "pg_upgrade_dump_%u.log"
#define SERVER_LOG_FILE "pg_upgrade_server.log"
#define UTILITY_LOG_FILE "pg_upgrade_utility.log"
@@ -263,6 +271,10 @@ typedef struct
FILE *internal; /* internal log FILE */
bool verbose; /* true -> be verbose in messages */
bool retain; /* retain log files on success */
+ /* Set of internal directories for output files */
+ char *basedir; /* Base output directory */
+ char *dumpdir; /* Dumps */
+ char *logdir; /* Log files */
} LogOpts;
diff --git a/src/bin/pg_upgrade/server.c b/src/bin/pg_upgrade/server.c
index f5d4656dec..7878d233de 100644
--- a/src/bin/pg_upgrade/server.c
+++ b/src/bin/pg_upgrade/server.c
@@ -238,8 +238,10 @@ start_postmaster(ClusterInfo *cluster, bool report_and_exit_on_error)
* vacuumdb --freeze actually freezes the tuples.
*/
snprintf(cmd, sizeof(cmd),
- "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
- cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
+ "\"%s/pg_ctl\" -w -l \"%s/%s\" -D \"%s\" -o \"-p %d -b%s %s%s\" start",
+ cluster->bindir,
+ log_opts.logdir,
+ SERVER_LOG_FILE, cluster->pgconfig, cluster->port,
(cluster == &new_cluster) ?
" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0" : "",
cluster->pgopts ? cluster->pgopts : "", socket_string);
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index c5ce732ee9..ac7bb79c38 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -768,8 +768,8 @@ psql --username=postgres --file=script.sql postgres
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the directory of the new cluster.
</para>
<para>
--
2.34.1
On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:
On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:
Neat idea. That would work fine for my case. So I am fine to stick
with this suggestion.
I have been looking at this idea, and the result is quite nice, being
simpler than anything that has been proposed on this thread yet. We
get a simpler removal logic, and there is no need to perform any kind
of sanity checks with the output path provided as long as we generate
the paths and the dirs after adjust_data_dir().
...
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the directory of the new cluster.
</para>
Uh, how are we instructing people to delete that pg_upgrade output
directory? If pg_upgrade completes cleanly, would it be removed
automatically?
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On Mon, Jan 24, 2022 at 12:39:30PM -0500, Bruce Momjian wrote:
On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:
On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:
Neat idea. That would work fine for my case. So I am fine to stick
with this suggestion.
I have been looking at this idea, and the result is quite nice, being
simpler than anything that has been proposed on this thread yet. We
get a simpler removal logic, and there is no need to perform any kind
of sanity checks with the output path provided as long as we generate
the paths and the dirs after adjust_data_dir().
...
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the directory of the new cluster.
</para>
Uh, how are we instructing people to delete that pg_upgrade output
directory? If pg_upgrade completes cleanly, would it be removed
automatically?
Clearly.
@@ -689,28 +751,5 @@ cleanup(void)
/* Remove dump and log files? */
if (!log_opts.retain)
- {
- int dbnum;
- char **filename;
-
- for (filename = output_files; *filename != NULL; filename++)
- unlink(*filename);
-
- /* remove dump files */
- unlink(GLOBALS_DUMP_FILE);
-
- if (old_cluster.dbarr.dbs)
- for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
- {
- char sql_file_name[MAXPGPATH],
- log_file_name[MAXPGPATH];
- DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];
-
- snprintf(sql_file_name, sizeof(sql_file_name), DB_DUMP_FILE_MASK, old_db->db_oid);
- unlink(sql_file_name);
-
- snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
- unlink(log_file_name);
- }
- }
+ rmtree(log_opts.basedir, true);
}
On Mon, Jan 24, 2022 at 11:41:17AM -0600, Justin Pryzby wrote:
On Mon, Jan 24, 2022 at 12:39:30PM -0500, Bruce Momjian wrote:
On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:
On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:
Neat idea. That would work fine for my case. So I am fine to stick
with this suggestion.
I have been looking at this idea, and the result is quite nice, being
simpler than anything that has been proposed on this thread yet. We
get a simpler removal logic, and there is no need to perform any kind
of sanity checks with the output path provided as long as we generate
the paths and the dirs after adjust_data_dir().
...
<para>
<application>pg_upgrade</application> creates various working files, such
- as schema dumps, in the current working directory. For security, be sure
- that that directory is not readable or writable by any other users.
+ as schema dumps, stored within <literal>pg_upgrade_output.d</literal> in
+ the directory of the new cluster.
</para>
Uh, how are we instructing people to delete that pg_upgrade output
directory? If pg_upgrade completes cleanly, would it be removed
automatically?
Clearly.
OK, thanks. There are really two cleanups --- first, the "log"
directory, and second deletion of the old cluster by running
delete_old_cluster.sh.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On Mon, Jan 24, 2022 at 02:44:21PM -0500, Bruce Momjian wrote:
OK, thanks. There are really two cleanups --- first, the "log"
directory, and second deletion of the old cluster by running
delete_old_cluster.sh.
Yes, this is the same thing as what's done on HEAD with a two-step
cleanup, except that we only need to remove the log directory
rather than each individual log entry.
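For reference, and assuming the layout from the patch upthread (BASE_OUTPUTDIR is pg_upgrade_output.d with log/ and dump/ underneath, and $NEWDATA standing in here for the new cluster's data directory), the tree that rmtree() removes as a single unit looks roughly like this; the exact file set depends on the options used and the number of databases:

    $NEWDATA/pg_upgrade_output.d/
        log/    pg_upgrade_internal.log
                pg_upgrade_server.log
                pg_upgrade_utility.log
                pg_upgrade_dump_<db-oid>.log
        dump/   pg_upgrade_dump_globals.sql
                pg_upgrade_dump_<db-oid>.custom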
--
Michael
On Mon, Jan 24, 2022 at 10:59:40AM +0900, Michael Paquier wrote:
On Thu, Jan 20, 2022 at 07:51:37PM +0900, Michael Paquier wrote:
Neat idea. That would work fine for my case. So I am fine to stick
with this suggestion.
I have been looking at this idea, and the result is quite nice, being
simpler than anything that has been proposed on this thread yet. We
get a simpler removal logic, and there is no need to perform any kind
of sanity checks with the output path provided as long as we generate
the paths and the dirs after adjust_data_dir().
Thoughts?
Andrew: you wanted to accommodate any change on the buildfarm client, right?
--
Justin
On Tue, Jan 25, 2022 at 10:45:29AM -0600, Justin Pryzby wrote:
Andrew: you wanted to accommodate any change on the buildfarm client, right?
Yes, this is going to need an adjustment of @logfiles in
TestUpgrade.pm, with the addition of
"$tmp_data_dir/pg_update_output.d/log/*.log" to be consistent with the
data fetched for the tests of older branches.
--
Michael
On Wed, Jan 26, 2022 at 09:44:48AM +0900, Michael Paquier wrote:
Yes, this is going to need an adjustment of @logfiles in
TestUpgrade.pm, with the addition of
"$tmp_data_dir/pg_update_output.d/log/*.log" to be consistent with the
data fetched for the tests of older branches.
Bleh. This would point to the old data directory, so this needs to be
"$self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/log/*.log"
to point to the upgraded cluster.
--
Michael
On Wed, Jan 26, 2022 at 11:00:28AM +0900, Michael Paquier wrote:
Bleh. This would point to the old data directory, so this needs to be
"$self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/log/*.log"
to point to the upgraded cluster.
Please note that I have sent a patch to merge this change in the
buildfarm code. Comments are welcome.
--
Michael
On 1/28/22 08:42, Michael Paquier wrote:
On Wed, Jan 26, 2022 at 11:00:28AM +0900, Michael Paquier wrote:
Bleh. This would point to the old data directory, so this needs to be
"$self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/log/*.log"
to point to the upgraded cluster.
Please note that I have sent a patch to merge this change in the
buildfarm code. Comments are welcome.
I have committed this. But it will take time to get every buildfarm owner
to upgrade. I will try to make a new release ASAP.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Fri, Jan 28, 2022 at 06:27:29PM -0500, Andrew Dunstan wrote:
I have committed this. But it will take time to get every buildfarm owner
to upgrade.
Thanks for that.
I will try to make a new release ASAP.
And thanks for that, as well.
--
Michael
On Sat, Jan 29, 2022 at 09:53:25AM +0900, Michael Paquier wrote:
On Fri, Jan 28, 2022 at 06:27:29PM -0500, Andrew Dunstan wrote:
I have committed this. But it will take time to get every buildfarm owner
to upgrade.
Thanks for that.
So, it took me some time to get back to this thread, but I have looked at it
for the last couple of days... The buildfarm client v14 has been
released on the 29th of January, which means that we are good to go.
I have found one issue while reviewing things: the creation of the new
subdirectory and its contents should satisfy group permissions for the
new cluster's data folder, but we were not doing that properly as we
called GetDataDirectoryCreatePerm() after make_outputdirs(), so we
missed the proper values for create_mode and umask(). The rest looked
fine, and I got a green CI run on my own repo. Hence, applied.
I'll keep an eye on the buildfarm, just in case.
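For anyone following along, here is a minimal sketch of the ordering that fix implies; it is not the exact committed code. GetDataDirectoryCreatePerm(), pg_mode_mask and pg_dir_create_mode come from src/common/file_perm, and the point is simply that they have to be consulted before make_outputdirs() so the new directories are created with a mode that matches the new cluster's group-access setting:

    /* Sketch only: learn the new cluster's directory mode first ... */
    if (!GetDataDirectoryCreatePerm(new_cluster.pgdata))
        pg_fatal("could not read permissions of directory \"%s\": %m\n",
                 new_cluster.pgdata);
    umask(pg_mode_mask);

    /* ... so everything below is created with the right pg_dir_create_mode */
    make_outputdirs(new_cluster.pgdata);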
--
Michael
Hi,
On Sun, Feb 06, 2022 at 01:36:07PM +0900, Michael Paquier wrote:
The buildfarm client v14 has been
released on the 29th of January, which means that we are good to go.
I didn't follow that thread closely, but if having the latest buildfarm client
version installed is a hard requirement, this will likely be a problem. First,
there was no email to warn buildfarm owners that a new version is available,
and even if there was, I doubt that every owner would have updated it since then.
Especially since this is the lunar new year period, at least 2 buildfarm
owners (me included) have been on holiday since last week.
On Sun, Feb 06, 2022 at 02:03:44PM +0800, Julien Rouhaud wrote:
I didn't follow that thread closely, but if having the latest buildfarm client
version installed is a hard requirement, this will likely be a problem. First,
there was no email to warn buildfarm owners that a new version is available,
and even if there was, I doubt that every owner would have updated it since then.
Especially since this is the lunar new year period, at least 2 buildfarm
owners (me included) have been on holiday since last week.
The buildfarm will still be able to work as it did, so that's not a
hard requirement per se. The only thing changing is that we would not
find the logs in the event of a failure in the tests of pg_upgrade;
the buildfarm client is coded to never fail if it does not see
logs in some of the paths it looks at, since it just holds the full history
of the paths we have used across the ages.
--
Michael
Michael Paquier <michael@paquier.xyz> writes:
So, it took me some time to get back to this thread, but I have looked at it
for the last couple of days... The buildfarm client v14 has been
released on the 29th of January, which means that we are good to go.
As already mentioned, there's been no notice to buildfarm owners ...
so has Andrew actually made a release?
regards, tom lane
On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:
Michael Paquier <michael@paquier.xyz> writes:
So, it took me some time to get back to this thread, but I have looked at it
for the last couple of days... The buildfarm client v14 has been
released on the 29th of January, which means that we are good to go.
As already mentioned, there's been no notice to buildfarm owners ...
so has Andrew actually made a release?
There's a v14 release on the github project [1] from 8 days ago, so it
seems so.
[1]: https://github.com/PGBuildFarm/client-code/releases/tag/REL_14
On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:
As already mentioned, there's been no notice to buildfarm owners ...
so has Andrew actually made a release?
There has been one as of 8 days ago:
https://github.com/PGBuildFarm/client-code/releases
And I have just looked at that as a point of reference.
--
Michael
Michael Paquier <michael@paquier.xyz> writes:
On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:
As already mentioned, there's been no notice to buildfarm owners ...
so has Andrew actually made a release?
There has been one as of 8 days ago:
https://github.com/PGBuildFarm/client-code/releases
[ scrapes buildfarm logs ... ]
Not even Andrew's own buildfarm critters are using it, so
permit me leave to doubt that he thinks it's fully baked.
Andrew?
regards, tom lane
On 2/6/22 02:17, Tom Lane wrote:
Michael Paquier <michael@paquier.xyz> writes:
On Sun, Feb 06, 2022 at 01:58:21AM -0500, Tom Lane wrote:
As already mentioned, there's been no notice to buildfarm owners ...
so has Andrew actually made a release?There has been one as of 8 days ago:
https://github.com/PGBuildFarm/client-code/releases
[ scrapes buildfarm logs ... ]
Not even Andrew's own buildfarm critters are using it, so
permit me leave to doubt that he thinks it's fully baked.
Andrew?
*sigh* Sometimes I have a mind like a sieve. I prepped the release a few
days ago and meant to come back the next morning and send out emails
announcing it, as well as rolling it out to my animals, and got diverted
so that didn't happen and it slipped my mind. I'll go and do those
things now.
But the commit really shouldn't have happened until we know that most
buildfarm owners have installed it. It should have waited not just
for the release but for widespread deployment. Otherwise we will just
lose any logging for an error that might appear.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:
*sigh* Sometimes I have a mind like a sieve. I prepped the release a few
days ago and meant to come back the next morning and send out emails
announcing it, as well as rolling it out to my animals, and got diverted
so that didn't happen and it slipped my mind. I'll go and do those
things now.
Thanks. I saw the release listed after a couple of days of
hibernation, and that a week had gone by since then, so I thought that the
timing was pretty good. I did not check the buildfarm members though,
sorry about that.
But the commit really shouldn't have happened until we know that most
buildfarm owners have installed it. It should have waited not just
for the release but for widespread deployment. Otherwise we will just
lose any logging for an error that might appear.
Would it be better if I just revert the change for now then and do it
again in one/two weeks? The buildfarm is green, so keeping things as
they are does not sound like a huge deal to me, either, for this
case.
FWIW, I have already switched my own animal to use the newest
buildfarm client.
--
Michael
Michael Paquier <michael@paquier.xyz> writes:
On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:
But the commit really shouldn't have happened until we know that most
buildfarm owners have installed it. It should have waited not just
for the release but for widespread deployment. Otherwise we will just
lose any logging for an error that might appear.
Would it be better if I just revert the change for now then and do it
again in one/two weeks?
I don't see a need to revert it.
I note, though, that there's still not been any email to the buildfarm
owners list about this update.
regards, tom lane
On Feb 6, 2022, at 7:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michael Paquier <michael@paquier.xyz> writes:
On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:
But the commit really shouldn't have happened until we know that most
buildfarm owners have installed it. It should have waited not just
for the release but for widespread deployment. Otherwise we will just
lose any logging for an error that might appear.
Would it be better if I just revert the change for now then and do it
again in one/two weeks?
I don't see a need to revert it.
I note, though, that there's still not been any email to the buildfarm
owners list about this update.
It’s stuck in moderation.
Cheers
Andrew
On 2/6/22 19:39, Tom Lane wrote:
Michael Paquier <michael@paquier.xyz> writes:
On Sun, Feb 06, 2022 at 08:32:59AM -0500, Andrew Dunstan wrote:
But the commit really shouldn't have happened until we know that most
buildfarm owners have installed it. It should have waited not just
for the release but for widespread deployment. Otherwise we will just
lose any logging for an error that might appear.
Would it be better if I just revert the change for now then and do it
again in one/two weeks?
I don't see a need to revert it.
I note, though, that there's still not been any email to the buildfarm
owners list about this update.
The announcement was held up in list moderation for 20 hours or so.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Mon, Feb 07, 2022 at 11:00:22AM -0500, Andrew Dunstan wrote:
On 2/6/22 19:39, Tom Lane wrote:
I note, though, that there's still not been any email to the buildfarm
owners list about this update.
The announcement was held up in list moderation for 20 hours or so.
I've certainly experienced way more than that in the past. Are volunteers
needed?