pg_upgrade parallelism
Hi,
Currently the docs about pg_upgrade say:
"""
<para>
The <option>--jobs</option> option allows multiple CPU cores to be used
for copying/linking of files and to dump and reload database schemas
in parallel; a good place to start is the maximum of the number of
CPU cores and tablespaces. This option can dramatically reduce the
time to upgrade a multi-database server running on a multiprocessor
machine.
</para>
"""
This makes users think that the --jobs option can use all CPU
cores, which is not true, or that it has something to do with multiple
databases, which is true only to some extent.
What the option really improves is upgrading servers with multiple
tablespaces. Of course, if --link or --clone is used pg_upgrade is still
very fast, but with the --copy option it is not what one would expect.
As an example, a customer with a 25 TB database, 40 cores, and lots of RAM
used --jobs=35 and got only 7 processes (they have 6 tablespaces), and
the disks were not used at maximum speed either. They expected 35
processes copying lots of files at the same time.
So, first I would like to improve documentation. What about something
like the attached?
Now, a couple of questions:
- In src/bin/pg_upgrade/file.c, copyFile() defines a buffer that
determines how many bytes are used per read()/write() call when
copying the relfilenode segments, and it is defined as (50 * BLCKSZ),
which is 400 kB. Isn't this too small?
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
I'm trying to add more parallelism by copying individual segments
of a relfilenode in different processes. Does anyone see a big
problem with trying to do that? I'm asking because no one has done it
before, and that might not be a good sign.
--
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
Attachments:
pg_upgrade_improve_docs.patch (text/x-diff)
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index 20efdd7..74eaaee 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -406,10 +406,10 @@ NET STOP postgresql-&majorversion;
<para>
The <option>--jobs</option> option allows multiple CPU cores to be used
for copying/linking of files and to dump and reload database schemas
- in parallel; a good place to start is the maximum of the number of
- CPU cores and tablespaces. This option can dramatically reduce the
- time to upgrade a multi-database server running on a multiprocessor
- machine.
+ in parallel; a good place to start is the greater of the number of
+ CPU cores and the number of tablespaces. This option can dramatically
+ reduce the time to upgrade a server with multiple tablespaces running
+ on a multiprocessor machine.
</para>
<para>
On Wed, 2021-11-17 at 14:44 -0500, Jaime Casanova wrote:
I'm trying to add more parallelism by copying individual segments
of a relfilenode in different processes. Does anyone see a big
problem with trying to do that? I'm asking because no one has done it
before, and that might not be a good sign.
I looked into speeding this up a while back, too. For the use case I
was looking at -- Greenplum, which has huge numbers of relfilenodes --
spinning disk I/O was absolutely the bottleneck and that is typically
not easily parallelizable. (In fact I felt at the time that Andres'
work on async I/O might be a better way forward, at least for some
filesystems.)
But you mentioned that you were seeing disks that weren't saturated, so
maybe some CPU optimization is still valuable? I am a little skeptical
that more parallelism is the way to do that, but numbers trump my
skepticism.
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
I have idly wondered if something based on splice() would be faster,
but I haven't actually tried it.
But there is now support for copy-on-write with the clone mode, isn't
there? Or are you not able to take advantage of it?
--Jacob
On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:
Hi,
Currently the docs about pg_upgrade say:
"""
<para>
The <option>--jobs</option> option allows multiple CPU cores to be used
for copying/linking of files and to dump and reload database schemas
in parallel; a good place to start is the maximum of the number of
CPU cores and tablespaces. This option can dramatically reduce the
time to upgrade a multi-database server running on a multiprocessor
machine.
</para>
"""Which make the user think that the --jobs option could use all CPU
cores. Which is not true. Or that it has anything to do with multiple
databases, which is true only to some extent.What that option really improves are upgrading servers with multiple
tablespaces, of course if --link or --clone are used pg_upgrade is still
very fast but used with the --copy option is not what one could expect.
As an example, a customer with a 25 TB database, 40 cores, and lots of RAM
used --jobs=35 and got only 7 processes (they have 6 tablespaces), and
the disks were not used at maximum speed either. They expected 35
processes copying lots of files at the same time.
I would test this. How long does it take to cp -r the data dirs vs
pg_upgrade them? If running 7 "cp" commands in parallel is faster than the
"copy" portion of pg_upgrade -j7, then pg_upgrade's file copy should be
optimized.
But if it's not faster, then maybe we should look at other options, like
your idea to copy relfilenodes (or their segments) in parallel.
So, first I would like to improve documentation. What about something
like the attached?
The relevant history is in commits
6f1b9e4efd94fc644f5de5377829d42e48c3c758
a89c46f9bc314ed549245d888da09b8c5cace104
--jobs originally parallelized pg_dump and pg_restore; copying/linking was
added later. So the docs should mention tablespaces, as you said, but
should also mention databases. It may not be an issue for you, but pg_restore
is the slowest part of our pg_upgrades, since we have many partitions.
Now, a couple of questions:
- In src/bin/pg_upgrade/file.c, copyFile() defines a buffer that
determines how many bytes are used per read()/write() call when
copying the relfilenode segments, and it is defined as (50 * BLCKSZ),
which is 400 kB. Isn't this too small?
Maybe - you'll have to check :)
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
No portable way. Linux has this:
https://man7.org/linux/man-pages/man2/copy_file_range.2.html
But I just read:
| First support for cross-filesystem copies was introduced in Linux
| 5.3. Older kernels will return -EXDEV when cross-filesystem
| copies are attempted.
To me that sounds like it may not be worth it, at least not quite yet.
But it would be good to test.
I'm trying to add more parallelism by copying individual segments
of a relfilenode in different processes. Does anyone see a big
problem with trying to do that? I'm asking because no one has done it
before, and that might not be a good sign.
My concern would be that if there are too many jobs and the storage bogs
down, it could end up slower.
I think something like that should have a separate option, not just --jobs.
Like --parallel-in-tablespace. The original implementation puts processes
across CPUs (for pg_dump/restore) and tablespaces (for I/O). Maybe it should
be possible to control those with separate options, too.
FWIW, we typically have only one database of any significance, but we do use
tablespaces, and I've used pg_upgrade --link since c. v9.0. --jobs probably
helps pg_dump/restore for the few customers who have multiple DBs. But it
probably doesn't help to parallelize --link across tablespaces (since our
tablespaces are actually on the same storage devices, but with different
filesystems).
I anticipate it might even make a few customers' upgrades a bit slower, since
--link is a metadata operation and probably involves a lot of FS barriers,
which the storage may be unable to handle in parallel.
--
Justin
On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:
Hi,
Currently the docs about pg_upgrade say:
"""
<para>
The <option>--jobs</option> option allows multiple CPU cores to be used
for copying/linking of files and to dump and reload database schemas
in parallel; a good place to start is the maximum of the number of
CPU cores and tablespaces. This option can dramatically reduce the
time to upgrade a multi-database server running on a multiprocessor
machine.
</para>
"""Which make the user think that the --jobs option could use all CPU
cores. Which is not true. Or that it has anything to do with multiple
databases, which is true only to some extent.
Uh, the behavior is a little more complicated. The --jobs option in
pg_upgrade is used to parallelize three operations:
* copying relation files
* dumping old cluster objects (via parallel_exec_prog())
* creating objects in the new cluster (via parallel_exec_prog())
The last two basically operate on databases in parallel --- they can't
dump/load a single database in parallel, but they can dump/load several
databases in parallel.
The documentation you quote above is saying that you set jobs based on
the number of CPUs (for dump/reload which are assumed to be CPU bound)
and the number of tablespaces (which is assumed to be I/O bound).
I am not sure how we can improve that text. We could just say the max
of the number of databases and tablespaces, but then the number of CPUs
needs to be involved: if you only have one CPU core, you don't want
parallel dumps/loads happening, since that will just cause CPU
contention with little benefit. We mention tablespaces because even if
you only have one CPU core, tablespace copying is I/O bound, so you
can still benefit from --jobs.
What the option really improves is upgrading servers with multiple
tablespaces. Of course, if --link or --clone is used pg_upgrade is still
very fast, but with the --copy option it is not what one would expect.
As an example, a customer with a 25 TB database, 40 cores, and lots of RAM
used --jobs=35 and got only 7 processes (they have 6 tablespaces), and
the disks were not used at maximum speed either. They expected 35
processes copying lots of files at the same time.
So, first I would like to improve documentation. What about something
like the attached?
Now, a couple of questions:
- In src/bin/pg_upgrade/file.c, copyFile() defines a buffer that
determines how many bytes are used per read()/write() call when
copying the relfilenode segments, and it is defined as (50 * BLCKSZ),
which is 400 kB. Isn't this too small?
Uh, if you find that increasing it helps, we can increase it --- I
don't know how that value was chosen. However, we are really just
copying the data into the kernel page cache, not forcing it to storage,
so I don't know if a larger value would help.
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
Uh, we could use buffered I/O, I guess, but again, would there be a
benefit?
I'm trying to add more parallelism by copying individual segments
of a relfilenode in different processes. Does anyone see a big
problem with trying to do that? I'm asking because no one has done it
before, and that might not be a good sign.
I think we were assuming the copy would be I/O bound and that
parallelism wouldn't help in a single tablespace.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On Wed, 2021-11-17 at 14:34 -0600, Justin Pryzby wrote:
On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
But I just read:
| First support for cross-filesystem copies was introduced in Linux
| 5.3. Older kernels will return -EXDEV when cross-filesystem
| copies are attempted.
To me that sounds like it may not be worth it, at least not quite yet.
But it would be good to test.
I think a downside of copy_file_range() is that filesystems might
perform a reflink under us, and to me that seems like something that
needs to be opted into via clone mode.
(https://lwn.net/Articles/846403/ is also good reading on some sharp
edges, though I doubt many of them apply to our use case.)
--Jacob
On Tue, Nov 23, 2021 at 06:54:03PM +0000, Jacob Champion wrote:
On Wed, 2021-11-17 at 14:34 -0600, Justin Pryzby wrote:
On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
But I just read:
| First support for cross-filesystem copies was introduced in Linux
| 5.3. Older kernels will return -EXDEV when cross-filesystem
| copies are attempted.
To me that sounds like it may not be worth it, at least not quite yet.
But it would be good to test.
I realized that pg_upgrade doesn't copy between filesystems - it copies from
$tablespace/PG13/NNN to $tablespace/PG14/NNN. So that's no issue.
And I did a bit of testing with this last weekend, and saw no performance
benefit from a larger buffersize, nor from copy_file_range, nor from libc stdio
(fopen/fread/fwrite/fclose).
I think a downside of copy_file_range() is that filesystems might
perform a reflink under us, and to me that seems like something that
needs to be opted into via clone mode.
You're referring to this:
| copy_file_range() gives filesystems an opportunity to implement "copy
| acceleration" techniques, such as the use of reflinks (i.e., two or more
| i-nodes that share pointers to the same copy-on-write disk blocks) or
| server-side-copy (in the case of NFS).
I don't see why that's an issue, though? It's COW, not a hardlink. It'd
be the same as if the filesystem implemented deduplication, right?
Postgres shouldn't notice or care.
I guess you're concerned about someone who wants to be able to run
pg_upgrade and preserve the ability to start the old cluster in addition
to the new. But that'd work fine on a COW filesystem, right?
(https://lwn.net/Articles/846403/ is also good reading on some sharp
edges, though I doubt many of them apply to our use case.)
Yea, it doesn't seem the issues are relevant, other than to indicate that the
syscall is still evolving, which supports my initial conclusion.
--
Justin
On Tue, 2021-11-23 at 13:51 -0600, Justin Pryzby wrote:
I guess you're concerned for someone who wants to be able to run pg_upgrade and
preserve the ability to start the old cluster in addition to the new.
Right. What I'm worried about is, if disk space or write performance on
the new cluster is a concern, then having a copy-mode upgrade silently
use copy-on-write could be a problem if the DBA needs copy mode to
actually copy.
--Jacob
Jacob Champion <pchampion@vmware.com> writes:
Right. What I'm worried about is, if disk space or write performance on
the new cluster is a concern, then having a copy-mode upgrade silently
use copy-on-write could be a problem if the DBA needs copy mode to
actually copy.
Particularly for the cross-filesystem case, where it would not be
unreasonable to expect that one could dismount or destroy the old FS
immediately afterward. I don't know if recent kernels try to make
that safe/transparent.
regards, tom lane
On Wed, Nov 17, 2021 at 08:04:41PM +0000, Jacob Champion wrote:
On Wed, 2021-11-17 at 14:44 -0500, Jaime Casanova wrote:
I'm trying to add more parallelism by copying individual segments
of a relfilenode in different processes. Does anyone see a big
problem with trying to do that? I'm asking because no one has done it
before, and that might not be a good sign.
I looked into speeding this up a while back, too. For the use case I
was looking at -- Greenplum, which has huge numbers of relfilenodes --
spinning disk I/O was absolutely the bottleneck and that is typically
not easily parallelizable. (In fact I felt at the time that Andres'
work on async I/O might be a better way forward, at least for some
filesystems.)
But you mentioned that you were seeing disks that weren't saturated, so
maybe some CPU optimization is still valuable? I am a little skeptical
that more parallelism is the way to do that, but numbers trump my
skepticism.
Sorry for being unresponsive for so long. I added a new --jobs-per-disk
option; this is a simple patch I made for the customer, and I ignored all
the WIN32 parts because I don't know anything about that platform. I wanted
to complete that part, but it has been in the same state for two months now.
AFAIU, there is a different struct for the parameters of the function that
will be called in the thread.
I also decided to create a new reap_*_child() function to use with
the new parameter.
Now, the customer went from copying 25 TB in 6 hours to 4h 45min, an
improvement of about 20%!
- Why do we read()/write() at all? Isn't there a faster way of copying
the file? I'm asking because I don't actually know.
I have idly wondered if something based on splice() would be faster,
but I haven't actually tried it.
I tried and got no better result.
But there is now support for copy-on-write with the clone mode, isn't
there? Or are you not able to take advantage of it?
Sadly, that's not possible, because those are different disks. And yes, I
know that's something pg_upgrade normally doesn't allow, but it is not
difficult to make it happen.
--
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
Attachments:
0001-Add-jobs-per-disk-option-to-allow-multiple-processes.patch (text/x-diff)
From 0d04f79cb51d6be0ced9c6561cfca5bfe18c4bdd Mon Sep 17 00:00:00 2001
From: Jaime Casanova <jcasanov@systemguards.com.ec>
Date: Wed, 15 Dec 2021 12:14:44 -0500
Subject: [PATCH] Add --jobs-per-disk option to allow multiple processes per
tablespace
This option is independent of the --jobs option. It will fork new
processes to copy the different segments of a relfilenode in parallel.
---
src/bin/pg_upgrade/option.c | 8 ++-
src/bin/pg_upgrade/parallel.c | 93 ++++++++++++++++++++++++++++++++
src/bin/pg_upgrade/pg_upgrade.h | 4 ++
src/bin/pg_upgrade/relfilenode.c | 59 +++++++++++---------
4 files changed, 139 insertions(+), 25 deletions(-)
diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index 66fe16964e..46b1913a42 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -54,6 +54,7 @@ parseCommandLine(int argc, char *argv[])
{"link", no_argument, NULL, 'k'},
{"retain", no_argument, NULL, 'r'},
{"jobs", required_argument, NULL, 'j'},
+ {"jobs-per-disks", required_argument, NULL, 'J'},
{"socketdir", required_argument, NULL, 's'},
{"verbose", no_argument, NULL, 'v'},
{"clone", no_argument, NULL, 1},
@@ -103,7 +104,7 @@ parseCommandLine(int argc, char *argv[])
if (os_user_effective_id == 0)
pg_fatal("%s: cannot be run as root\n", os_info.progname);
- while ((option = getopt_long(argc, argv, "d:D:b:B:cj:kNo:O:p:P:rs:U:v",
+ while ((option = getopt_long(argc, argv, "d:D:b:B:cj:J:kNo:O:p:P:rs:U:v",
long_options, &optindex)) != -1)
{
switch (option)
@@ -132,6 +133,10 @@ parseCommandLine(int argc, char *argv[])
user_opts.jobs = atoi(optarg);
break;
+ case 'J':
+ user_opts.jobs_per_disk = atoi(optarg);
+ break;
+
case 'k':
user_opts.transfer_mode = TRANSFER_MODE_LINK;
break;
@@ -291,6 +296,7 @@ usage(void)
printf(_(" -d, --old-datadir=DATADIR old cluster data directory\n"));
printf(_(" -D, --new-datadir=DATADIR new cluster data directory\n"));
printf(_(" -j, --jobs=NUM number of simultaneous processes or threads to use\n"));
+ printf(_(" -J, --jobs-per-disk=NUM number of simultaneous processes or threads to use per tablespace\n"));
printf(_(" -k, --link link instead of copying files to new cluster\n"));
printf(_(" -N, --no-sync do not wait for changes to be written safely to disk\n"));
printf(_(" -o, --old-options=OPTIONS old cluster options to pass to the server\n"));
diff --git a/src/bin/pg_upgrade/parallel.c b/src/bin/pg_upgrade/parallel.c
index ee7364da3b..82f698a9ab 100644
--- a/src/bin/pg_upgrade/parallel.c
+++ b/src/bin/pg_upgrade/parallel.c
@@ -17,6 +17,9 @@
#include "pg_upgrade.h"
static int parallel_jobs;
+static int current_jobs = 0;
+
+static bool reap_subchild(bool wait_for_child);
#ifdef WIN32
/*
@@ -277,6 +280,60 @@ win32_transfer_all_new_dbs(transfer_thread_arg *args)
#endif
+
+/*
+ * parallel_process_relfile_segment()
+ *
+ * Copy or link file from old cluster to new one. If vm_must_add_frozenbit
+ * is true, visibility map forks are converted and rewritten, even in link
+ * mode.
+ */
+void
+parallel_process_relfile_segment(FileNameMap *map, const char *type_suffix, bool vm_must_add_frozenbit, const char *old_file, const char *new_file)
+{
+#ifndef WIN32
+ pid_t child;
+#else
+ HANDLE child;
+ transfer_thread_arg *new_arg;
+#endif
+ if (user_opts.jobs <= 1 || user_opts.jobs_per_disk <= 1)
+ process_relfile_segment(map, type_suffix, vm_must_add_frozenbit, old_file, new_file);
+ else
+ {
+ /* parallel */
+
+ /* harvest any dead children */
+ while (reap_subchild(false) == true)
+ ;
+
+ /* must we wait for a dead child? limit children per tablespace to jobs_per_disk */
+ if (current_jobs >= user_opts.jobs_per_disk)
+ reap_subchild(true);
+
+ /* set this before we start the job */
+ current_jobs++;
+
+ /* Ensure stdio state is quiesced before forking */
+ fflush(NULL);
+
+#ifndef WIN32
+ child = fork();
+ if (child == 0)
+ {
+ process_relfile_segment(map, type_suffix, vm_must_add_frozenbit, old_file, new_file);
+ /* use _exit to skip atexit() functions */
+ _exit(0);
+ }
+ else if (child < 0)
+ /* fork failed */
+ pg_fatal("could not create worker process: %s\n", strerror(errno));
+#endif
+ }
+}
+
+
+
/*
* collect status from a completed worker child
*/
@@ -345,3 +402,39 @@ reap_child(bool wait_for_child)
return true;
}
+
+
+
+
+/*
+ * collect status from a completed worker subchild
+ */
+static bool
+reap_subchild(bool wait_for_child)
+{
+#ifndef WIN32
+ int work_status;
+ pid_t child;
+#else
+ int thread_num;
+ DWORD res;
+#endif
+
+ if (user_opts.jobs <= 1 || current_jobs == 0)
+ return false;
+
+#ifndef WIN32
+ child = waitpid(-1, &work_status, wait_for_child ? 0 : WNOHANG);
+ if (child == (pid_t) -1)
+ pg_fatal("waitpid() failed: %s\n", strerror(errno));
+ if (child == 0)
+ return false; /* no children, or no dead children */
+ if (work_status != 0)
+ pg_fatal("child process exited abnormally: status %d\n", work_status);
+#endif
+
+ /* do this after job has been removed */
+ current_jobs--;
+
+ return true;
+}
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 22169f1002..adcb24ffea 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -282,6 +282,7 @@ typedef struct
bool do_sync; /* flush changes to disk */
transferMode transfer_mode; /* copy files or link them? */
int jobs; /* number of processes/threads to use */
+ int jobs_per_disk; /* number of processes/threads per tablespace */
char *socketdir; /* directory to use for Unix sockets */
} UserOpts;
@@ -450,4 +451,7 @@ void parallel_exec_prog(const char *log_file, const char *opt_log_file,
void parallel_transfer_all_new_dbs(DbInfoArr *old_db_arr, DbInfoArr *new_db_arr,
char *old_pgdata, char *new_pgdata,
char *old_tablespace);
+
+void process_relfile_segment(FileNameMap *map, const char *suffix, bool vm_must_add_frozenbit, const char *old_file, const char *new_file);
+void parallel_process_relfile_segment(FileNameMap *map, const char *suffix, bool vm_must_add_frozenbit, const char *old_file, const char *new_file);
bool reap_child(bool wait_for_child);
diff --git a/src/bin/pg_upgrade/relfilenode.c b/src/bin/pg_upgrade/relfilenode.c
index 5dbefbceaf..8a7c49efaa 100644
--- a/src/bin/pg_upgrade/relfilenode.c
+++ b/src/bin/pg_upgrade/relfilenode.c
@@ -17,6 +17,7 @@
static void transfer_single_new_db(FileNameMap *maps, int size, char *old_tablespace);
static void transfer_relfile(FileNameMap *map, const char *suffix, bool vm_must_add_frozenbit);
+void process_relfile_segment(FileNameMap *map, const char *suffix, bool vm_must_add_frozenbit, const char *old_file, const char *new_file);
/*
@@ -232,30 +233,40 @@ transfer_relfile(FileNameMap *map, const char *type_suffix, bool vm_must_add_fro
/* Copying files might take some time, so give feedback. */
pg_log(PG_STATUS, "%s", old_file);
- if (vm_must_add_frozenbit && strcmp(type_suffix, "_vm") == 0)
+ parallel_process_relfile_segment(map, type_suffix, vm_must_add_frozenbit, old_file, new_file);
+ }
+}
+
+
+
+void
+process_relfile_segment(FileNameMap *map, const char *type_suffix, bool vm_must_add_frozenbit, const char *old_file, const char *new_file)
+{
+
+ if (vm_must_add_frozenbit && strcmp(type_suffix, "_vm") == 0)
+ {
+ /* Need to rewrite visibility map format */
+ pg_log(PG_VERBOSE, "rewriting \"%s\" to \"%s\"\n",
+ old_file, new_file);
+ rewriteVisibilityMap(old_file, new_file, map->nspname, map->relname);
+ }
+ else
+ switch (user_opts.transfer_mode)
{
- /* Need to rewrite visibility map format */
- pg_log(PG_VERBOSE, "rewriting \"%s\" to \"%s\"\n",
- old_file, new_file);
- rewriteVisibilityMap(old_file, new_file, map->nspname, map->relname);
+ case TRANSFER_MODE_CLONE:
+ pg_log(PG_VERBOSE, "cloning \"%s\" to \"%s\"\n",
+ old_file, new_file);
+ cloneFile(old_file, new_file, map->nspname, map->relname);
+ break;
+ case TRANSFER_MODE_COPY:
+ pg_log(PG_VERBOSE, "copying \"%s\" to \"%s\"\n",
+ old_file, new_file);
+ copyFile(old_file, new_file, map->nspname, map->relname);
+ break;
+ case TRANSFER_MODE_LINK:
+ pg_log(PG_VERBOSE, "linking \"%s\" to \"%s\"\n",
+ old_file, new_file);
+ linkFile(old_file, new_file, map->nspname, map->relname);
+ break;
}
- else
- switch (user_opts.transfer_mode)
- {
- case TRANSFER_MODE_CLONE:
- pg_log(PG_VERBOSE, "cloning \"%s\" to \"%s\"\n",
- old_file, new_file);
- cloneFile(old_file, new_file, map->nspname, map->relname);
- break;
- case TRANSFER_MODE_COPY:
- pg_log(PG_VERBOSE, "copying \"%s\" to \"%s\"\n",
- old_file, new_file);
- copyFile(old_file, new_file, map->nspname, map->relname);
- break;
- case TRANSFER_MODE_LINK:
- pg_log(PG_VERBOSE, "linking \"%s\" to \"%s\"\n",
- old_file, new_file);
- linkFile(old_file, new_file, map->nspname, map->relname);
- }
- }
}
--
2.20.1