pg_restore --multi-thread
I know we've already had a discussion on the naming of the pg_restore -m
option, but in any case this description in pg_restore --help is confusing:
-m, --multi-thread=NUM use this many parallel connections to restore
Either it is using that many threads in the client, or it is using that many
connections to the server. I assume the implementation does approximately
both, but we should be clear about what we promise to the user. Either:
Reserve this many connections on the server. Or: Reserve this many threads
in the kernel of the client. The documentation in the reference/man page is
equally confused.
Also, the term "multi" is redundant, because whether it is multi or single is
obviously determined by the value of NUM.
Peter Eisentraut wrote:
I know we've already had a discussion on the naming of the pg_restore -m
option, but in any case this description in pg_restore --help is confusing:
-m, --multi-thread=NUM use this many parallel connections to restore
Either it is using that many threads in the client, or it is using that many
connections to the server. I assume the implementation does approximately
both, but we should be clear about what we promise to the user. Either:
Reserve this many connections on the server. Or: Reserve this many threads
in the kernel of the client. The documentation in the reference/man page is
equally confused.
Also, the term "multi" is redundant, because whether it is multi or single is
obviously determined by the value of NUM.
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
I'm not sure what you mean about reserving threads in the client kernel.
I also don't really understand what is confusing about the description.
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
How about calling it --num-connections or something like that? I agree
with Peter that "thread" is not the best terminology on platforms where
there is no threading involved.
regards, tom lane
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
How about calling it --num-connections or something like that? I agree
with Peter that "thread" is not the best terminology on platforms where
there is no threading involved.
--num-workers or --num-connections would both work.
Joshua D. Drake
regards, tom lane
--
PostgreSQL - XMPP: jdrake@jabber.postgresql.org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
How about calling it --num-connections or something like that? I agree
with Peter that "thread" is not the best terminology on platforms where
there is no threading involved.
--num-workers or --num-connections would both work.
*shrug* whatever. What should the short option be (if any?). -n is
taken, so -N ?
cheers
andrew
On Thu, 2009-02-12 at 11:47 -0500, Andrew Dunstan wrote:
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
How about calling it --num-connections or something like that? I agree
with Peter that "thread" is not the best terminology on platforms where
there is no threading involved.
--num-workers or --num-connections would both work.
*shrug* whatever. What should the short option be (if any?). -n is
taken, so -N ?
Works for me.
cheers
andrew
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:47 -0500, Andrew Dunstan wrote:
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
How about calling it --num-connections or something like that? I agree
with Peter that "thread" is not the best terminology on platforms where
there is no threading involved.
--num-workers or --num-connections would both work.
*shrug* whatever. What should the short option be (if any?). -n is
taken, so -N ?
Works for me.
is -j already taken?
cheers
andrew
--
Cédric Villemain
Database Administrator
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org
On Thu, Feb 12, 2009 at 11:37 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
--num-workers or --num-connections would both work.
--num-parallel?
--
Jonah H. Harris, Senior DBA
myYearbook.com
On 2009-02-12, at 14:15 , Jonah H. Harris wrote:
On Thu, Feb 12, 2009 at 11:37 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
--num-workers or --num-connections would both work.
--num-parallel?
--num-concurrent?
Michael Glaesemann
michael.glaesemann@myyearbook.com
On Thu, Feb 12, 2009 at 02:16:39PM -0500, Michael Glaesemann wrote:
On 2009-02-12, at 14:15 , Jonah H. Harris wrote:
On Thu, Feb 12, 2009 at 11:37 AM, Joshua D. Drake <jd@commandprompt.com> wrote:
--num-workers or --num-connections would both work.
--num-parallel?
--num-concurrent?
--num-bikeshed? ;)
Cheers,
David (purple!)
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Andrew Dunstan wrote:
I also don't really understand what is confusing about the description.
Where does the benefit of using it come from? When would one want to
use it? Is it because the parallelization happens on the client or on
the server? Does it happen because of CPU parallelization or because of
disk access parallelization? Is it useful to use it on multi-CPU
systems or on multi-disk systems? The current description implies a bit
of each, I think. And it is not clear what a good number to choose is.
On Thursday 12 February 2009 11:50:26 Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:47 -0500, Andrew Dunstan wrote:
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case).
In either case, the program will use up to NUM concurrent connections
to the server.
How about calling it --num-connections or something like that? I
agree with Peter that "thread" is not the best terminology on
platforms where there is no threading involved.
--num-workers or --num-connections would both work.
*shrug* whatever. What should the short option be (if any?). -n is
taken, so -N ?
Works for me.
yikes... -n and -N have specific meanings to pg_dump; I think keeping
consistency with that in pg_restore would be a bonus. (I still see people get
confused because -d works differently between those two apps.)
Possibly -w might work, which could expand to --workers, which glosses over
the thread/process difference, would also be available for pg_dump, and has
existing mindshare with autovacuum workers.
not having a short option seems ok to me too, but I really think -N is a bad
idea.
--
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com
Cédric Villemain wrote:
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:47 -0500, Andrew Dunstan wrote:
Joshua D. Drake wrote:
On Thu, 2009-02-12 at 11:32 -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
The implementation is actually different across platforms: on Windows
the workers are genuine threads, while elsewhere they are forked
children in the same fashion as the backend (non-EXEC_BACKEND case). In
either case, the program will use up to NUM concurrent connections to
the server.
How about calling it --num-connections or something like that? I agree
with Peter that "thread" is not the best terminology on platforms where
there is no threading involved.
--num-workers or --num-connections would both work.
*shrug* whatever. What should the short option be (if any?). -n is
taken, so -N ?
Works for me.
is -j already taken?
else (like make):
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (pg_restore) to run simultaneously. If the -j
option is given without an argument, pg_restore will not limit the number of
jobs that can run simultaneously.
cheers
andrew
On Mon, Feb 16, 2009 at 12:10 PM, Cédric Villemain
<cedric.villemain@dalibo.com> wrote:
is -j already affected ?
else (like make):
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (pg_restore) to run simultaneously. If the -j
option is given without an argument, pg_restore will not limit the number of
jobs that can run simultaneously.
I like both -j and -w.
-j because we all know "make -j"
-w because i like --num-workers
--
F4FQM
Kerunix Flan
Laurent Laborde
Cédric Villemain wrote:
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (pg_restore) to run simultaneously. If the -j
option is given without an argument, pg_restore will not limit the number of
jobs that can run simultaneously.
Quite apart from anything else, this description is almost 100% dead
wrong. The argument is not optional at all, and there is no unlimited
parallelism. If you want to know how it actually works look at the dev docs.
cheers
andrew
Andrew Dunstan wrote:
Cédric Villemain wrote:
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (pg_restore) to run simultaneously. If the -j
option is given without an argument, pg_restore will not limit the number of
jobs that can run simultaneously.
Quite apart from anything else, this description is almost 100% dead
wrong. The argument is not optional at all, and there is no unlimited
parallelism. If you want to know how it actually works look at the dev
docs.
What I'm still missing here is a piece of documentation or a guideline
that says when a given number of threads/jobs/workers would be
appropriate. For make -j, this is pretty clear: If you have N CPUs to
spare, use -j N. For pg_restore, this is not made clear: Is it the
number of CPUs on the client or the server or the number of disks on the
client or the server or perhaps a combination of this or something else?
Peter Eisentraut wrote:
Andrew Dunstan wrote:
Cédric Villemain wrote:
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (pg_restore) to run simultaneously. If the -j
option is given without an argument, pg_restore will not limit the number of
jobs that can run simultaneously.
Quite apart from anything else, this description is almost 100% dead
wrong. The argument is not optional at all, and there is no
unlimited parallelism. If you want to know how it actually works look
at the dev docs.
What I'm still missing here is a piece of documentation or a guideline
that says when a given number of threads/jobs/workers would be
appropriate. For make -j, this is pretty clear: If you have N CPUs to
spare, use -j N. For pg_restore, this is not made clear: Is it the
number of CPUs on the client or the server or the number of disks on
the client or the server or perhaps a combination of this or something
else?
The short answer is that we don't know yet. There is anecdotal evidence
that the number of CPUs on the server is a good place to start, but we
should be honest enough to say that this is a new feature and we are
still gathering information about its performance. If you want to give
some advice, then I think the best advice is to try a variety of
settings to see what works best for you, and if you have a good set of
figures report it back to us.
cheers
andrew
On Fri, 2009-02-20 at 09:33 -0500, Andrew Dunstan wrote:
The short answer is that we don't know yet. There is anecdotal evidence
that the number of CPUs on the server is a good place to start, but we
should be honest enough to say that this is a new feature and we are
still gathering information about its performance. If you want to give
some advice, then I think the best advice is to try a variety of
settings to see what works best for you, and if you have a good set of
figures report it back to us.
There has been some fairly heavy testing and research that caused the
patch in the first place. The thread is here:
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00695.php
It is a long thread. The end result was that the fastest restore time for
220G was performed with 24 threads on an 8 core box. It came in at 3.5
hours.
http://archives.postgresql.org/pgsql-hackers/2008-02/msg01092.php
It is important to point out that this was a machine with 50 spindles.
Which is where your bottleneck is going to be immediately after solving
the CPU bound nature of the problem.
So although the CPU question is easily answered, the IO is not. IO is
extremely variable in its performance.
Sincerely,
Joshua D. Drake
Joshua D. Drake wrote:
On Fri, 2009-02-20 at 09:33 -0500, Andrew Dunstan wrote:
The short answer is that we don't know yet. There is anecdotal evidence
that the number of CPUs on the server is a good place to start, but we
should be honest enough to say that this is a new feature and we are
still gathering information about its performance. If you want to give
some advice, then I think the best advice is to try a variety of
settings to see what works best for you, and if you have a good set of
figures report it back to us.
There has been some fairly heavy testing and research that caused the
patch in the first place. The thread is here:
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00695.php
It is a long thread. The end result was that the fastest restore time for
220G was performed with 24 threads on an 8 core box. It came in at 3.5
hours.
http://archives.postgresql.org/pgsql-hackers/2008-02/msg01092.php
It is important to point out that this was a machine with 50 spindles.
Which is where your bottleneck is going to be immediately after solving
the CPU bound nature of the problem.
So although the CPU question is easily answered, the IO is not. IO is
extremely variable in its performance.
Yes, quite true. But parallel restore doesn't work quite the same way
your original shell scripts did. It tries harder to keep the job pool
continuously occupied, and so its best number of jobs is likely to be a
bit lower than yours.
But you are right that there isn't a simple formula.
cheers
andrew
On Fri, Feb 20, 2009 at 09:22:58AM -0800, Joshua D. Drake wrote:
On Fri, 2009-02-20 at 09:33 -0500, Andrew Dunstan wrote:
The short answer is that we don't know yet. There is anecdotal evidence
that the number of CPUs on the server is a good place to start, but we
should be honest enough to say that this is a new feature and we are
still gathering information about its performance. If you want to give
some advice, then I think the best advice is to try a variety of
settings to see what works best for you, and if you have a good set of
figures report it back to us.
There has been some fairly heavy testing and research that caused the
patch in the first place. The thread is here:
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00695.php
It is a long thread. The end result was that the fastest restore time for
220G was performed with 24 threads on an 8 core box. It came in at 3.5
hours.
http://archives.postgresql.org/pgsql-hackers/2008-02/msg01092.php
It is important to point out that this was a machine with 50 spindles.
Which is where your bottleneck is going to be immediately after solving
the CPU bound nature of the problem.
So although the CPU question is easily answered, the IO is not. IO is
extremely variable in its performance.
Sincerely,
Joshua D. Drake
I also ran some tests against a more modest system that was still
showing a performance improvement at (number-of-cores * 2):
http://archives.postgresql.org/pgsql-hackers/2008-11/msg01399.php
I think that a good starting point for any use should be the number
of cores, given these two data points.
Cheers,
Ken
Andrew Dunstan <andrew@dunslane.net> wrote:
Joshua D. Drake wrote:
the fastest restore time for
220G was performed with 24 threads on an 8 core box.
It is important to point out that this was a machine with 50 spindles.
Which is where your bottleneck is going to be immediately after solving
the CPU bound nature of the problem.
But you are right that there isn't a simple formula.
Perhaps the greater of the number of CPUs or effective spindles?
(24 sounds suspiciously close to effective spindles on a 50 spindle box
with RAID 10.)
-Kevin
On Fri, 2009-02-20 at 11:57 -0600, Kevin Grittner wrote:
But you are right that there isn't a simple formula.
Perhaps the greater of the number of CPUs or effective spindles?
(24 sounds suspiciously close to effective spindles on a 50 spindle box
with RAID 10.)
It does except that you aren't accounting for 7200RPM vs 10k vs 15k vs
iSCSI vs FibreChannel etc...
You would have to literally do the math to figure it all out. Those 50
spindles were DAS. You go iSCSI and all of a sudden you have turned
those 50 spindles into an effective 8 DAS spindles. Not to mention if
you only have a single path for your FibreChannel etc...
Joshua D. Drake
On Thursday 12 February 2009 17:41:01 Peter Eisentraut wrote:
I know we've already had a discussion on the naming of the pg_restore -m
option, but in any case this description in pg_restore --help is confusing:
-m, --multi-thread=NUM use this many parallel connections to restore
Either it is using that many threads in the client, or it is using that
many connections to the server. I assume the implementation does
approximately both, but we should be clear about what we promise to the
user. Either: Reserve this many connections on the server. Or: Reserve
this many threads in the kernel of the client. The documentation in the
reference/man page is equally confused.
Also, the term "multi" is redundant, because whether it is multi or single
is obviously determined by the value of NUM.
After reviewing the discussion and the implementation, I would say "workers"
would be the best description of the feature, but unfortunately the options -w
or -W are not available. I'd also avoid -n or -N for "num..." because pg_dump
already uses -n and -N for something else, and we are now trying to avoid
inconsistent options between these programs. Also, option names usually don't
start with units (imagine --num-shared-buffers or --num-port).
While I think "jobs" isn't a totally accurate description, I would still
propose to use -j/--jobs for the option name, because it is neutral about the
implementation and has a strong precedent as being used to increase the
parallelization to get the work done faster. I also noticed that Andrew D.
used "jobs" in his own emails to comment on the feature. :-)
The attached patch also updates the documentation to give some additional
advice about which numbers to use.
Attachment: pg_restore-jobs.diff (text/x-patch)
Index: doc/src/sgml/ref/pg_restore.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/ref/pg_restore.sgml,v
retrieving revision 1.80
diff -u -3 -p -r1.80 pg_restore.sgml
--- doc/src/sgml/ref/pg_restore.sgml 26 Feb 2009 16:02:37 -0000 1.80
+++ doc/src/sgml/ref/pg_restore.sgml 19 Mar 2009 21:18:32 -0000
@@ -216,6 +216,46 @@
</varlistentry>
<varlistentry>
+ <term><option>-j <replaceable class="parameter">number-of-jobs</replaceable></option></term>
+ <term><option>--jobs=<replaceable class="parameter">number-of-jobs</replaceable></option></term>
+ <listitem>
+ <para>
+ Run the most time-consuming parts
+ of <application>pg_restore</> — those which load data,
+ create indexes, or create constraints — using multiple
+ concurrent jobs. This option can dramatically reduce the time
+ to restore a large database to a server running on a
+ multi-processor machine.
+ </para>
+
+ <para>
+ Each job is one process or one thread, depending on the
+ operating system, and uses a separate connection to the
+ server.
+ </para>
+
+ <para>
+ The optimal value for this option depends on the hardware
+ setup of the server, of the client, and of the network.
+ Factors include the number of CPU cores and the disk setup. A
+ good place to start is the number of CPU cores on the server,
+ but values larger than that can also lead to faster restore
+ times in many cases. Of course, values that are too high will
+ lead to decreasing performance because of thrashing.
+ </para>
+
+ <para>
+ Only the custom archive format is supported with this option.
+ The input file must be a regular file (not, for example, a
+ pipe). This option is ignored when emitting a script rather
+ than connecting directly to a database server. Also, multiple
+ jobs cannot be used together with the
+ option <option>--single-transaction</option>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-l</option></term>
<term><option>--list</option></term>
<listitem>
@@ -242,28 +282,6 @@
</varlistentry>
<varlistentry>
- <term><option>-m <replaceable class="parameter">number-of-threads</replaceable></option></term>
- <term><option>--multi-thread=<replaceable class="parameter">number-of-threads</replaceable></option></term>
- <listitem>
- <para>
- Run the most time-consuming parts of <application>pg_restore</>
- — those which load data, create indexes, or create
- constraints — using multiple concurrent connections to the
- database. This option can dramatically reduce the time to restore a
- large database to a server running on a multi-processor machine.
- </para>
-
- <para>
- This option is ignored when emitting a script rather than connecting
- directly to a database server. Multiple threads cannot be used
- together with <option>--single-transaction</option>. Also, the input
- must be a plain file (not, for example, a pipe), and at present only
- the custom archive format is supported.
- </para>
- </listitem>
- </varlistentry>
-
- <varlistentry>
<term><option>-n <replaceable class="parameter">namespace</replaceable></option></term>
<term><option>--schema=<replaceable class="parameter">schema</replaceable></option></term>
<listitem>
Index: src/bin/pg_dump/pg_backup.h
===================================================================
RCS file: /cvsroot/pgsql/src/bin/pg_dump/pg_backup.h,v
retrieving revision 1.50
diff -u -3 -p -r1.50 pg_backup.h
--- src/bin/pg_dump/pg_backup.h 26 Feb 2009 16:02:37 -0000 1.50
+++ src/bin/pg_dump/pg_backup.h 19 Mar 2009 21:18:32 -0000
@@ -139,7 +139,7 @@ typedef struct _restoreOptions
int suppressDumpWarnings; /* Suppress output of WARNING entries
* to stderr */
bool single_txn;
- int number_of_threads;
+ int number_of_jobs;
bool *idWanted; /* array showing which dump IDs to emit */
} RestoreOptions;
Index: src/bin/pg_dump/pg_backup_archiver.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/pg_dump/pg_backup_archiver.c,v
retrieving revision 1.167
diff -u -3 -p -r1.167 pg_backup_archiver.c
--- src/bin/pg_dump/pg_backup_archiver.c 13 Mar 2009 22:50:44 -0000 1.167
+++ src/bin/pg_dump/pg_backup_archiver.c 19 Mar 2009 21:18:32 -0000
@@ -354,7 +354,7 @@ RestoreArchive(Archive *AHX, RestoreOpti
*
* In parallel mode, turn control over to the parallel-restore logic.
*/
- if (ropt->number_of_threads > 1 && ropt->useDB)
+ if (ropt->number_of_jobs > 1 && ropt->useDB)
restore_toc_entries_parallel(AH);
else
{
@@ -3061,7 +3061,7 @@ static void
restore_toc_entries_parallel(ArchiveHandle *AH)
{
RestoreOptions *ropt = AH->ropt;
- int n_slots = ropt->number_of_threads;
+ int n_slots = ropt->number_of_jobs;
ParallelSlot *slots;
int work_status;
int next_slot;
Index: src/bin/pg_dump/pg_restore.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/pg_dump/pg_restore.c,v
retrieving revision 1.95
diff -u -3 -p -r1.95 pg_restore.c
--- src/bin/pg_dump/pg_restore.c 11 Mar 2009 03:33:29 -0000 1.95
+++ src/bin/pg_dump/pg_restore.c 19 Mar 2009 21:18:32 -0000
@@ -93,8 +93,8 @@ main(int argc, char **argv)
{"host", 1, NULL, 'h'},
{"ignore-version", 0, NULL, 'i'},
{"index", 1, NULL, 'I'},
+ {"jobs", 1, NULL, 'j'},
{"list", 0, NULL, 'l'},
- {"multi-thread", 1, NULL, 'm'},
{"no-privileges", 0, NULL, 'x'},
{"no-acl", 0, NULL, 'x'},
{"no-owner", 0, NULL, 'O'},
@@ -146,7 +146,7 @@ main(int argc, char **argv)
}
}
- while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:lL:m:n:Op:P:RsS:t:T:U:vwWxX:1",
+ while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:lL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
@@ -181,6 +181,10 @@ main(int argc, char **argv)
/* ignored, deprecated option */
break;
+ case 'j': /* number of restore jobs */
+ opts->number_of_jobs = atoi(optarg);
+ break;
+
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
@@ -189,10 +193,6 @@ main(int argc, char **argv)
opts->tocFile = strdup(optarg);
break;
- case 'm': /* number of restore threads */
- opts->number_of_threads = atoi(optarg);
- break;
-
case 'n': /* Dump data for this schema only */
opts->schemaNames = strdup(optarg);
break;
@@ -318,9 +318,9 @@ main(int argc, char **argv)
}
/* Can't do single-txn mode with multiple connections */
- if (opts->single_txn && opts->number_of_threads > 1)
+ if (opts->single_txn && opts->number_of_jobs > 1)
{
- fprintf(stderr, _("%s: cannot specify both --single-transaction and multiple threads\n"),
+ fprintf(stderr, _("%s: cannot specify both --single-transaction and multiple jobs\n"),
progname);
exit(1);
}
@@ -417,9 +417,9 @@ usage(const char *progname)
printf(_(" -C, --create create the target database\n"));
printf(_(" -e, --exit-on-error exit on error, default is to continue\n"));
printf(_(" -I, --index=NAME restore named index\n"));
+ printf(_(" -j, --jobs=NUM use this many parallel jobs to restore\n"));
printf(_(" -L, --use-list=FILENAME use table of contents from this file for\n"
" selecting/ordering output\n"));
- printf(_(" -m, --multi-thread=NUM use this many parallel connections to restore\n"));
printf(_(" -n, --schema=NAME restore only objects in this schema\n"));
printf(_(" -O, --no-owner skip restoration of object ownership\n"));
printf(_(" -P, --function=NAME(args)\n"
Peter Eisentraut wrote:
While I think "jobs" isn't a totally accurate description, I would still
propose to use -j/--jobs for the option name, because it is neutral about the
implementation and has a strong precedent as being used to increase the
parallelization to get the work done faster. I also noticed that Andrew D.
used "jobs" in his own emails to comment on the feature. :-)
The attached patch also updates the documentation to give some additional
advice about which numbers to use.
Looks reasonable.
cheers
andrew