pgsql: Sync pg_dump and pg_dumpall output
Before exiting, any files are fsync'ed. A --no-sync option is also
provided for a faster exit if desired.
Michael Paquier.
Reviewed by Albe Laurenz
Discussion: /messages/by-id/CAB7nPqS1uZ=Ov+UruW6jr3vB-S_DLVMPc0dQpV-fTDjmm0ZQMg@mail.gmail.com
Branch
------
master
Details
-------
http://git.postgresql.org/pg/commitdiff/96a7128b7b4c9ce4fb51df8c8b216dfab6340766
Modified Files
--------------
doc/src/sgml/ref/pg_dump.sgml | 14 ++++++++++++++
doc/src/sgml/ref/pg_dumpall.sgml | 14 ++++++++++++++
src/bin/pg_dump/pg_backup.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 15 ++++++++++-----
src/bin/pg_dump/pg_backup_archiver.h | 1 +
src/bin/pg_dump/pg_backup_custom.c | 5 +++++
src/bin/pg_dump/pg_backup_directory.c | 8 ++++++++
src/bin/pg_dump/pg_backup_tar.c | 5 +++++
src/bin/pg_dump/pg_dump.c | 12 ++++++++++--
src/bin/pg_dump/pg_dumpall.c | 15 +++++++++++++++
src/common/file_utils.c | 19 +++++++++++++++++++
src/include/common/file_utils.h | 1 +
12 files changed, 103 insertions(+), 8 deletions(-)
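The idea behind the commit can be sketched roughly as follows. This is a minimal, hypothetical Python sketch of the sync-on-exit behavior, not the actual C implementation (which lives in src/common/file_utils.c): write the dump file, fsync it, then fsync its parent directory so the new directory entry is durable as well.

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data to path and flush it to stable storage.

    Hypothetical sketch of the commit's sync-on-exit behavior:
    fsync the file itself, then fsync its parent directory so the
    directory entry survives a crash.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file contents to disk
    finally:
        os.close(fd)
    # Some platforms disallow fsync on a directory file descriptor,
    # so tolerate failures here rather than aborting.
    dirfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dirfd)
    except OSError:
        pass
    finally:
        os.close(dirfd)
```

With --no-sync, pg_dump skips the fsync calls entirely, trading the durability guarantee for speed, which is why the thread below argues that most regression-test invocations should pass it.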
--
Sent via pgsql-committers mailing list (pgsql-committers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers
Andrew,
* Andrew Dunstan (andrew@dunslane.net) wrote:
Sync pg_dump and pg_dumpall output
This probably should have adjusted all callers of pg_dump in the
regression tests to use the --no-sync option, otherwise we'll end up
spending possibly a good bit of time calling fsync() during the
regression tests unnecessarily.
Thanks!
Stephen
On 03/22/2017 11:39 AM, Stephen Frost wrote:
Andrew,
* Andrew Dunstan (andrew@dunslane.net) wrote:
Sync pg_dump and pg_dumpall output
This probably should have adjusted all callers of pg_dump in the
regression tests to use the --no-sync option, otherwise we'll end up
spending possibly a good bit of time calling fsync() during the
regression tests unnecessarily.
All of them? The impact is not likely to be huge in most cases
(possibly different on Windows). On crake, the bin-check stage actually
took less time after the change than before, so I suspect that the
impact will be pretty small.
Still I agree that we should have tests for both cases.
Michael, do you want to look at that? If not, I'll take a look but it
will probably be next week before I get to it.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Andrew,
* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
On 03/22/2017 11:39 AM, Stephen Frost wrote:
* Andrew Dunstan (andrew@dunslane.net) wrote:
Sync pg_dump and pg_dumpall output
This probably should have adjusted all callers of pg_dump in the
regression tests to use the --no-sync option, otherwise we'll end up
spending possibly a good bit of time calling fsync() during the
regression tests unnecessarily.
All of them? The impact is not likely to be huge in most cases
(possibly different on Windows). On crake, the bin-check stage actually
took less time after the change than before, so I suspect that the
impact will be pretty small.
Well, perhaps not all, but I'd think --no-sync would be better to have
in most cases. We turn off fsync for all of the TAP tests and all
initdb calls, so it seems like we should here too. Perhaps it won't be
much overhead on an unloaded system, but not all of the buildfarm
animals seem to be on unloaded systems, nor are they particularly fast
in general or have fast I/O.
Still I agree that we should have tests for both cases.
Perhaps, though if I recall correctly, we've basically had zero calls
for fsync() until this. If we don't feel like we need to test that in
the backend then it seems a bit silly to feel like we need it for
pg_dump's regression coverage.
That said, perhaps the right answer is to have a couple tests for both
the backend and for pg_dump which do exercise the fsync-enabled paths.
Thanks!
Stephen
On 03/22/2017 12:10 PM, Stephen Frost wrote:
Still I agree that we should have tests for both cases.
Perhaps, though if I recall correctly, we've basically had zero calls
for fsync() until this. If we don't feel like we need to test that in
the backend then it seems a bit silly to feel like we need it for
pg_dump's regression coverage.
That said, perhaps the right answer is to have a couple tests for both
the backend and for pg_dump which do exercise the fsync-enabled paths.
+1
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Thu, Mar 23, 2017 at 1:10 AM, Stephen Frost <sfrost@snowman.net> wrote:
Andrew,
* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
On 03/22/2017 11:39 AM, Stephen Frost wrote:
* Andrew Dunstan (andrew@dunslane.net) wrote:
Sync pg_dump and pg_dumpall output
This probably should have adjusted all callers of pg_dump in the
regression tests to use the --no-sync option, otherwise we'll end up
spending possibly a good bit of time calling fsync() during the
regression tests unnecessarily.
All of them? The impact is not likely to be huge in most cases
(possibly different on Windows). On crake, the bin-check stage actually
took less time after the change than before, so I suspect that the
impact will be pretty small.
Well, perhaps not all, but I'd think --no-sync would be better to have
in most cases. We turn off fsync for all of the TAP tests and all
initdb calls, so it seems like we should here too. Perhaps it won't be
much overhead on an unloaded system, but not all of the buildfarm
animals seem to be on unloaded systems, nor are they particularly fast
in general or have fast I/O.
Agreed.
Still I agree that we should have tests for both cases.
Perhaps, though if I recall correctly, we've basically had zero calls
for fsync() until this. If we don't feel like we need to test that in
the backend then it seems a bit silly to feel like we need it for
pg_dump's regression coverage.
That said, perhaps the right answer is to have a couple tests for both
the backend and for pg_dump which do exercise the fsync-enabled paths.
And agreed.
The patch attached uses --no-sync in most places for pg_dump, except
in 4 places of 002_pg_dump.pl to stress as well the sync code path.
--
Michael
Attachment: test-dump-nosync.patch (application/octet-stream)
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 366737440c..5b57300c2a 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -39,6 +39,7 @@ my %pgdump_runs = (
binary_upgrade => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
'--format=custom',
"--file=$tempdir/binary_upgrade.dump",
'-w',
@@ -55,6 +56,7 @@ my %pgdump_runs = (
clean => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/clean.sql",
'--include-subscriptions',
'-c',
@@ -63,6 +65,7 @@ my %pgdump_runs = (
clean_if_exists => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/clean_if_exists.sql",
'--include-subscriptions',
'-c',
@@ -71,12 +74,16 @@ my %pgdump_runs = (
'postgres', ], },
column_inserts => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/column_inserts.sql",
- '-a', '--column-inserts',
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/column_inserts.sql",
+ '-a',
+ '--column-inserts',
'postgres', ], },
createdb => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/createdb.sql",
'--include-subscriptions',
'-C',
@@ -86,6 +93,7 @@ my %pgdump_runs = (
data_only => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/data_only.sql",
'--include-subscriptions',
'-a',
@@ -94,8 +102,13 @@ my %pgdump_runs = (
'-v', # no-op, just make sure it works
'postgres', ], },
defaults => {
- dump_cmd => [ 'pg_dump', '-f', "$tempdir/defaults.sql", 'postgres', ],
- },
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ '-f',
+ "$tempdir/defaults.sql",
+ 'postgres', ], },
+ # Do not use --no-sync to give test coverage for data sync.
defaults_custom_format => {
test_key => 'defaults',
dump_cmd => [
@@ -105,6 +118,7 @@ my %pgdump_runs = (
'pg_restore', '-Fc',
"--file=$tempdir/defaults_custom_format.sql",
"$tempdir/defaults_custom_format.dump", ], },
+ # Do not use --no-sync to give test coverage for data sync.
defaults_dir_format => {
test_key => 'defaults',
dump_cmd => [
@@ -114,6 +128,7 @@ my %pgdump_runs = (
'pg_restore', '-Fd',
"--file=$tempdir/defaults_dir_format.sql",
"$tempdir/defaults_dir_format", ], },
+ # Do not use --no-sync to give test coverage for data sync.
defaults_parallel => {
test_key => 'defaults',
dump_cmd => [
@@ -123,6 +138,7 @@ my %pgdump_runs = (
'pg_restore',
"--file=$tempdir/defaults_parallel.sql",
"$tempdir/defaults_parallel", ], },
+ # Do not use --no-sync to give test coverage for data sync.
defaults_tar_format => {
test_key => 'defaults',
dump_cmd => [
@@ -135,17 +151,22 @@ my %pgdump_runs = (
"$tempdir/defaults_tar_format.tar", ], },
exclude_dump_test_schema => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/exclude_dump_test_schema.sql",
- '--exclude-schema=dump_test', 'postgres', ], },
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/exclude_dump_test_schema.sql",
+ '--exclude-schema=dump_test',
+ 'postgres', ], },
exclude_test_table => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/exclude_test_table.sql",
'--exclude-table=dump_test.test_table',
'postgres', ], },
exclude_test_table_data => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/exclude_test_table_data.sql",
'--exclude-table-data=dump_test.test_table',
'--no-unlogged-table-data',
@@ -153,30 +174,52 @@ my %pgdump_runs = (
pg_dumpall_globals => {
dump_cmd => [
'pg_dumpall', '-v',
- "--file=$tempdir/pg_dumpall_globals.sql", '-g', ], },
+ "--file=$tempdir/pg_dumpall_globals.sql", '-g',
+ '--no-sync', ], },
pg_dumpall_globals_clean => {
dump_cmd => [
- 'pg_dumpall', "--file=$tempdir/pg_dumpall_globals_clean.sql",
- '-g', '-c', ], },
+ 'pg_dumpall',
+ "--file=$tempdir/pg_dumpall_globals_clean.sql",
+ '-g',
+ '-c',
+ '--no-sync', ], },
pg_dumpall_dbprivs => {
- dump_cmd =>
- [ 'pg_dumpall', "--file=$tempdir/pg_dumpall_dbprivs.sql", ], },
+ dump_cmd => [
+ 'pg_dumpall',
+ '--no-sync',
+ "--file=$tempdir/pg_dumpall_dbprivs.sql", ], },
no_blobs => {
- dump_cmd =>
- [ 'pg_dump', "--file=$tempdir/no_blobs.sql", '-B', 'postgres', ], },
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/no_blobs.sql",
+ '-B',
+ 'postgres', ], },
no_privs => {
- dump_cmd =>
- [ 'pg_dump', "--file=$tempdir/no_privs.sql", '-x', 'postgres', ], },
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/no_privs.sql",
+ '-x',
+ 'postgres', ], },
no_owner => {
- dump_cmd =>
- [ 'pg_dump', "--file=$tempdir/no_owner.sql", '-O', 'postgres', ], },
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/no_owner.sql",
+ '-O',
+ 'postgres', ], },
only_dump_test_schema => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/only_dump_test_schema.sql",
- '--schema=dump_test', 'postgres', ], },
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/only_dump_test_schema.sql",
+ '--schema=dump_test',
+ 'postgres', ], },
only_dump_test_table => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/only_dump_test_table.sql",
'--table=dump_test.test_table',
'--lock-wait-timeout=1000000',
@@ -184,6 +227,7 @@ my %pgdump_runs = (
role => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/role.sql",
'--role=regress_dump_test_role',
'--schema=dump_test_second_schema',
@@ -192,6 +236,7 @@ my %pgdump_runs = (
test_key => 'role',
dump_cmd => [
'pg_dump',
+ '--no-sync',
'--format=directory',
'--jobs=2',
"--file=$tempdir/role_parallel",
@@ -204,28 +249,29 @@ my %pgdump_runs = (
schema_only => {
dump_cmd => [
'pg_dump', '--format=plain', "--file=$tempdir/schema_only.sql",
- '-s', 'postgres', ], },
+ '--no-sync', '-s', 'postgres', ], },
section_pre_data => {
dump_cmd => [
'pg_dump', "--file=$tempdir/section_pre_data.sql",
'--include-subscriptions',
- '--section=pre-data', 'postgres', ], },
+ '--section=pre-data', '--no-sync', 'postgres', ], },
section_data => {
dump_cmd => [
'pg_dump', "--file=$tempdir/section_data.sql",
- '--section=data', 'postgres', ], },
+ '--section=data', '--no-sync', 'postgres', ], },
section_post_data => {
dump_cmd => [
'pg_dump', "--file=$tempdir/section_post_data.sql",
- '--section=post-data', 'postgres', ], },
+ '--section=post-data', '--no-sync', 'postgres', ], },
test_schema_plus_blobs => {
dump_cmd => [
'pg_dump', "--file=$tempdir/test_schema_plus_blobs.sql",
- '--schema=dump_test', '-b', '-B', 'postgres', ], },
+
+ '--schema=dump_test', '-b', '-B', '--no-sync', 'postgres', ], },
with_oids => {
dump_cmd => [
'pg_dump', '--oids',
- '--include-subscriptions',
+ '--include-subscriptions', '--no-sync',
"--file=$tempdir/with_oids.sql", 'postgres', ], },);
###############################################################
diff --git a/src/bin/pg_dump/t/010_dump_connstr.pl b/src/bin/pg_dump/t/010_dump_connstr.pl
index 81b5779248..55944d10fa 100644
--- a/src/bin/pg_dump/t/010_dump_connstr.pl
+++ b/src/bin/pg_dump/t/010_dump_connstr.pl
@@ -59,29 +59,29 @@ $node->command_ok(
'-U', $dbname4 ],
'pg_dumpall with long ASCII name 1');
$node->command_ok(
- [ 'pg_dumpall', '-r', '-f', $discard, '--dbname',
+ [ 'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
$node->connstr($dbname2),
'-U', $dbname3 ],
'pg_dumpall with long ASCII name 2');
$node->command_ok(
- [ 'pg_dumpall', '-r', '-f', $discard, '--dbname',
+ [ 'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
$node->connstr($dbname3),
'-U', $dbname2 ],
'pg_dumpall with long ASCII name 3');
$node->command_ok(
- [ 'pg_dumpall', '-r', '-f', $discard, '--dbname',
+ [ 'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
$node->connstr($dbname4),
'-U', $dbname1 ],
'pg_dumpall with long ASCII name 4');
$node->command_ok(
- [ 'pg_dumpall', '-r', '-l', 'dbname=template1' ],
+ [ 'pg_dumpall', '--no-sync', '-r', '-l', 'dbname=template1' ],
'pg_dumpall -l accepts connection string');
$node->run_log([ 'createdb', "foo\n\rbar" ]);
# not sufficient to use -r here
$node->command_fails(
- [ 'pg_dumpall', '-f', $discard ],
+ [ 'pg_dumpall', '--no-sync', '-f', $discard ],
'pg_dumpall with \n\r in database name');
$node->run_log([ 'dropdb', "foo\n\rbar" ]);
@@ -91,7 +91,7 @@ $node->safe_psql($dbname1, 'CREATE TABLE t0()');
# XXX no printed message when this fails, just SIGPIPE termination
$node->command_ok(
- [ 'pg_dump', '-Fd', '-j2', '-f', $dirfmt,
+ [ 'pg_dump', '-Fd', '--no-sync', '-j2', '-f', $dirfmt,
'-U', $dbname1, $node->connstr($dbname1) ],
'parallel dump');
@@ -112,7 +112,7 @@ $node->command_ok(
'parallel restore with create');
-$node->command_ok([ 'pg_dumpall', '-f', $plain, '-U', $dbname1 ],
+$node->command_ok([ 'pg_dumpall', '--no-sync', '-f', $plain, '-U', $dbname1 ],
'take full dump');
system_log('cat', $plain);
my ($stderr, $result);
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index cbc5259550..841da034b0 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -170,7 +170,7 @@ createdb "$dbname2" || createdb_status=$?
createdb "$dbname3" || createdb_status=$?
if "$MAKE" -C "$oldsrc" installcheck; then
- pg_dumpall -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
fix_sql=""
@@ -221,7 +221,7 @@ case $testhost in
*) sh ./analyze_new_cluster.sh ;;
esac
-pg_dumpall -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
# no need to echo commands anymore
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index 7b3955aac9..3e45ccb005 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -41,16 +41,21 @@ my $tempdir_short = TestLib::tempdir_short;
my %pgdump_runs = (
binary_upgrade => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/binary_upgrade.sql",
- '--schema-only', '--binary-upgrade',
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/binary_upgrade.sql",
+ '--schema-only',
+ '--binary-upgrade',
'--dbname=postgres', ], },
clean => {
dump_cmd => [
'pg_dump', "--file=$tempdir/clean.sql",
- '-c', '--dbname=postgres', ], },
+ '-c', '--no-sync',
+ '--dbname=postgres', ], },
clean_if_exists => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/clean_if_exists.sql",
'-c',
'--if-exists',
@@ -58,12 +63,16 @@ my %pgdump_runs = (
'postgres', ], },
column_inserts => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/column_inserts.sql",
- '-a', '--column-inserts',
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/column_inserts.sql",
+ '-a',
+ '--column-inserts',
'postgres', ], },
createdb => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/createdb.sql",
'-C',
'-R', # no-op, just for testing
@@ -71,6 +80,7 @@ my %pgdump_runs = (
data_only => {
dump_cmd => [
'pg_dump',
+ '--no-sync',
"--file=$tempdir/data_only.sql",
'-a',
'-v', # no-op, just make sure it works
@@ -81,7 +91,7 @@ my %pgdump_runs = (
defaults_custom_format => {
test_key => 'defaults',
dump_cmd => [
- 'pg_dump', '-Fc', '-Z6',
+ 'pg_dump', '--no-sync', '-Fc', '-Z6',
"--file=$tempdir/defaults_custom_format.dump", 'postgres', ],
restore_cmd => [
'pg_restore',
@@ -90,7 +100,7 @@ my %pgdump_runs = (
defaults_dir_format => {
test_key => 'defaults',
dump_cmd => [
- 'pg_dump', '-Fd',
+ 'pg_dump', '--no-sync', '-Fd',
"--file=$tempdir/defaults_dir_format", 'postgres', ],
restore_cmd => [
'pg_restore',
@@ -99,8 +109,8 @@ my %pgdump_runs = (
defaults_parallel => {
test_key => 'defaults',
dump_cmd => [
- 'pg_dump', '-Fd', '-j2', "--file=$tempdir/defaults_parallel",
- 'postgres', ],
+ 'pg_dump', '--no-sync', '-Fd', '-j2',
+ "--file=$tempdir/defaults_parallel", 'postgres', ],
restore_cmd => [
'pg_restore',
"--file=$tempdir/defaults_parallel.sql",
@@ -108,37 +118,60 @@ my %pgdump_runs = (
defaults_tar_format => {
test_key => 'defaults',
dump_cmd => [
- 'pg_dump', '-Ft',
+ 'pg_dump', '--no-sync', '-Ft',
"--file=$tempdir/defaults_tar_format.tar", 'postgres', ],
restore_cmd => [
'pg_restore',
"--file=$tempdir/defaults_tar_format.sql",
"$tempdir/defaults_tar_format.tar", ], },
pg_dumpall_globals => {
- dump_cmd =>
- [ 'pg_dumpall', "--file=$tempdir/pg_dumpall_globals.sql", '-g', ],
+ dump_cmd => [
+ 'pg_dumpall',
+ '--no-sync',
+ "--file=$tempdir/pg_dumpall_globals.sql",
+ '-g', ],
},
no_privs => {
- dump_cmd =>
- [ 'pg_dump', "--file=$tempdir/no_privs.sql", '-x', 'postgres', ], },
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/no_privs.sql",
+ '-x',
+ 'postgres', ], },
no_owner => {
- dump_cmd =>
- [ 'pg_dump', "--file=$tempdir/no_owner.sql", '-O', 'postgres', ], },
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/no_owner.sql",
+ '-O',
+ 'postgres', ], },
schema_only => {
- dump_cmd =>
- [ 'pg_dump', "--file=$tempdir/schema_only.sql", '-s', 'postgres', ],
+ dump_cmd => [
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/schema_only.sql",
+ '-s',
+ 'postgres', ],
},
section_pre_data => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/section_pre_data.sql",
- '--section=pre-data', 'postgres', ], },
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/section_pre_data.sql",
+ '--section=pre-data',
+ 'postgres', ], },
section_data => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/section_data.sql",
- '--section=data', 'postgres', ], },
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/section_data.sql",
+ '--section=data',
+ 'postgres', ], },
section_post_data => {
dump_cmd => [
- 'pg_dump', "--file=$tempdir/section_post_data.sql",
+ 'pg_dump',
+ '--no-sync',
+ "--file=$tempdir/section_post_data.sql",
'--section=post-data', 'postgres', ], },);
###############################################################
On Fri, Mar 24, 2017 at 12:26:33PM +0900, Michael Paquier wrote:
On Thu, Mar 23, 2017 at 1:10 AM, Stephen Frost <sfrost@snowman.net> wrote:
* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
On 03/22/2017 11:39 AM, Stephen Frost wrote:
* Andrew Dunstan (andrew@dunslane.net) wrote:
Sync pg_dump and pg_dumpall output
This probably should have adjusted all callers of pg_dump in the
regression tests to use the --no-sync option, otherwise we'll end up
spending possibly a good bit of time calling fsync() during the
regression tests unnecessarily.
All of them? The impact is not likely to be huge in most cases
(possibly different on Windows). On crake, the bin-check stage actually
took less time after the change than before, so I suspect that the
impact will be pretty small.
Well, perhaps not all, but I'd think --no-sync would be better to have
in most cases. We turn off fsync for all of the TAP tests and all
initdb calls, so it seems like we should here too. Perhaps it won't be
much overhead on an unloaded system, but not all of the buildfarm
animals seem to be on unloaded systems, nor are they particularly fast
in general or have fast I/O.
Agreed.
Still I agree that we should have tests for both cases.
Perhaps, though if I recall correctly, we've basically had zero calls
for fsync() until this. If we don't feel like we need to test that in
the backend then it seems a bit silly to feel like we need it for
pg_dump's regression coverage.
That said, perhaps the right answer is to have a couple tests for both
the backend and for pg_dump which do exercise the fsync-enabled paths.
And agreed.
The patch attached uses --no-sync in most places for pg_dump, except
in 4 places of 002_pg_dump.pl to stress as well the sync code path.
[Action required within three days. This is a generic notification.]
The above-described topic is currently a PostgreSQL 10 open item. Andrew,
since you committed the patch believed to have created it, you own this open
item. If some other commit is more relevant or if this does not belong as a
v10 open item, please let us know. Otherwise, please observe the policy on
open item ownership[1] and send a status update within three calendar days of
this message. Include a date for your subsequent status update. Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping v10. Consequently, I will appreciate your efforts
toward speedy resolution. Thanks.
[1]: /messages/by-id/20170404140717.GA2675809@tornado.leadboat.com
On Wed, Apr 05, 2017 at 02:49:41AM -0400, Noah Misch wrote:
On Fri, Mar 24, 2017 at 12:26:33PM +0900, Michael Paquier wrote:
On Thu, Mar 23, 2017 at 1:10 AM, Stephen Frost <sfrost@snowman.net> wrote:
* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
On 03/22/2017 11:39 AM, Stephen Frost wrote:
* Andrew Dunstan (andrew@dunslane.net) wrote:
Sync pg_dump and pg_dumpall output
This probably should have adjusted all callers of pg_dump in the
regression tests to use the --no-sync option, otherwise we'll end up
spending possibly a good bit of time calling fsync() during the
regression tests unnecessarily.
All of them? The impact is not likely to be huge in most cases
(possibly different on Windows). On crake, the bin-check stage actually
took less time after the change than before, so I suspect that the
impact will be pretty small.
Well, perhaps not all, but I'd think --no-sync would be better to have
in most cases. We turn off fsync for all of the TAP tests and all
initdb calls, so it seems like we should here too. Perhaps it won't be
much overhead on an unloaded system, but not all of the buildfarm
animals seem to be on unloaded systems, nor are they particularly fast
in general or have fast I/O.
Agreed.
Still I agree that we should have tests for both cases.
Perhaps, though if I recall correctly, we've basically had zero calls
for fsync() until this. If we don't feel like we need to test that in
the backend then it seems a bit silly to feel like we need it for
pg_dump's regression coverage.
That said, perhaps the right answer is to have a couple tests for both
the backend and for pg_dump which do exercise the fsync-enabled paths.
And agreed.
The patch attached uses --no-sync in most places for pg_dump, except
in 4 places of 002_pg_dump.pl to stress as well the sync code path.
[Action required within three days. This is a generic notification.]
The above-described topic is currently a PostgreSQL 10 open item. Andrew,
since you committed the patch believed to have created it, you own this open
item. If some other commit is more relevant or if this does not belong as a
v10 open item, please let us know. Otherwise, please observe the policy on
open item ownership[1] and send a status update within three calendar days of
this message. Include a date for your subsequent status update. Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping v10. Consequently, I will appreciate your efforts
toward speedy resolution. Thanks.
[1]: /messages/by-id/20170404140717.GA2675809@tornado.leadboat.com
This PostgreSQL 10 open item is past due for your status update. Kindly send
a status update within 24 hours, and include a date for your subsequent status
update. Refer to the policy on open item ownership:
/messages/by-id/20170404140717.GA2675809@tornado.leadboat.com
On Mon, Apr 10, 2017 at 1:01 PM, Noah Misch <noah@leadboat.com> wrote:
This PostgreSQL 10 open item is past due for your status update. Kindly send
a status update within 24 hours, and include a date for your subsequent status
update. Refer to the policy on open item ownership:
/messages/by-id/20170404140717.GA2675809@tornado.leadboat.com
Note for reviewers and committers: the patch sent upthread in
/messages/by-id/CAB7nPqTUOpF792rDOnBkswZ=ZgHwxdB01OQU2tAF1KU4iUuLrw@mail.gmail.com
still applies and passes all the tests.
--
Michael
On 04/10/2017 12:43 AM, Michael Paquier wrote:
On Mon, Apr 10, 2017 at 1:01 PM, Noah Misch <noah@leadboat.com> wrote:
This PostgreSQL 10 open item is past due for your status update. Kindly send
a status update within 24 hours, and include a date for your subsequent status
update. Refer to the policy on open item ownership:
/messages/by-id/20170404140717.GA2675809@tornado.leadboat.com
Note for reviewers and committers: the patch sent upthread in
/messages/by-id/CAB7nPqTUOpF792rDOnBkswZ=ZgHwxdB01OQU2tAF1KU4iUuLrw@mail.gmail.com
still applies and passes all the tests.
Sorry, the previous notification flew by me. I have looked at the patch. On
the surface it looks fine, and I will apply it after running a test, which
should be today.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services