TAP / recovery-test fs-level backups, psql enhancements etc
Hi all
I've been working with the new TAP tests for recovery and have a number of
enhancements I'd like to make to the tooling to make writing tests easier
and nicer. I've also included two improvements proposed by Kyotaro
HORIGUCHI in the prior thread about the now-committed TAP recovery tests.
I developed these changes as part of testing failover slots and logical
decoding timeline following, where I found a need for better control over
psql, the ability to make filesystem level backups, etc. It doesn't make
sense to try to jam all that into my test script when it belongs in the
infrastructure.
Patches attached, each explains what it does and what for.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-Allow-multiple-temp-config-arguments-to-pg_regress.patch (text/x-patch)
From 26fdff1b8f76f3c47d4e19be7c4aef3cdcd3393c Mon Sep 17 00:00:00 2001
From: Andrew Dunstan <andrew@dunslane.net>
Date: Sun, 28 Feb 2016 09:38:43 -0500
Subject: [PATCH 1/7] Allow multiple --temp-config arguments to pg_regress
This means that if, for example, TEMP_CONFIG is set and a Makefile
explicitly sets a temp-config file, both will now be used.
Patch from John Gorman.
---
src/test/regress/pg_regress.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
index a1902fe..416829d 100644
--- a/src/test/regress/pg_regress.c
+++ b/src/test/regress/pg_regress.c
@@ -80,7 +80,7 @@ static char *encoding = NULL;
static _stringlist *schedulelist = NULL;
static _stringlist *extra_tests = NULL;
static char *temp_instance = NULL;
-static char *temp_config = NULL;
+static _stringlist *temp_configs = NULL;
static bool nolocale = false;
static bool use_existing = false;
static char *hostname = NULL;
@@ -2117,7 +2117,7 @@ regression_main(int argc, char *argv[], init_function ifunc, test_function tfunc
split_to_stringlist(strdup(optarg), ", ", &extraroles);
break;
case 19:
- temp_config = strdup(optarg);
+ add_stringlist_item(&temp_configs, optarg);
break;
case 20:
use_existing = true;
@@ -2249,8 +2249,9 @@ regression_main(int argc, char *argv[], init_function ifunc, test_function tfunc
fputs("log_temp_files = 128kB\n", pg_conf);
fputs("max_prepared_transactions = 2\n", pg_conf);
- if (temp_config != NULL)
+ for (sl = temp_configs; sl != NULL; sl = sl->next)
{
+ char *temp_config = sl->str;
FILE *extra_conf;
char line_buf[1024];
--
2.1.0
0002-TAP-Add-filtering-to-RecursiveCopy.patch (text/x-patch)
From 8fd42e646327c2d18c102c87f8785d17145913c3 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:06:47 +0800
Subject: [PATCH 2/7] TAP: Add filtering to RecursiveCopy
Allow RecursiveCopy to accept a filter function so callers can
exclude unwanted files. Also POD-ify it.
---
src/test/perl/RecursiveCopy.pm | 76 ++++++++++++++++++++++++++++++++++++++----
1 file changed, 70 insertions(+), 6 deletions(-)
diff --git a/src/test/perl/RecursiveCopy.pm b/src/test/perl/RecursiveCopy.pm
index 9362aa8..97f84e5 100644
--- a/src/test/perl/RecursiveCopy.pm
+++ b/src/test/perl/RecursiveCopy.pm
@@ -1,4 +1,18 @@
-# RecursiveCopy, a simple recursive copy implementation
+
+=pod
+
+=head1 NAME
+
+RecursiveCopy - simple recursive copy implementation
+
+=head1 SYNOPSIS
+
+use RecursiveCopy;
+
+RecursiveCopy::copypath($from, $to);
+
+=cut
+
package RecursiveCopy;
use strict;
@@ -7,10 +21,56 @@ use warnings;
use File::Basename;
use File::Copy;
+=pod
+
+=head2 copypath($from, $to)
+
+Copy all files and directories from $from to $to. Raises an exception
+if a file would be overwritten, the source dir can't be read, or any
+I/O operation fails. Always returns true. On failure the copy may be
+in some incomplete state; no cleanup is attempted.
+
+If the keyword param 'filterfn' is defined it's invoked as a sub that
+returns true if the file/directory should be copied, false otherwise.
+The path passed to the filter is the full source path of the file
+being considered.
+
+e.g.
+
+RecursiveCopy::copypath('/some/path', '/empty/dir',
+ filterfn => sub {
+ # omit children of pg_log
+ my $src = shift;
+ return $src !~ /\/pg_log\//;
+ }
+);
+
+=cut
+
sub copypath
{
- my $srcpath = shift;
- my $destpath = shift;
+ my ($srcpath, $destpath, %params) = @_;
+
+ die("if specified, 'filterfn' must be a sub ref")
+ if defined $params{filterfn} && !ref $params{filterfn};
+
+ my $filterfn;
+ if (defined $params{filterfn})
+ {
+ $filterfn = $params{filterfn};
+ }
+ else
+ {
+ $filterfn = sub { return 1; };
+ }
+
+ return _copypath_recurse($srcpath, $destpath, $filterfn);
+}
+
+# Recursive private guts of copypath
+sub _copypath_recurse
+{
+ my ($srcpath, $destpath, $filterfn) = @_;
die "Cannot operate on symlinks" if -l $srcpath or -l $destpath;
@@ -19,8 +79,11 @@ sub copypath
die "Destination path $destpath exists as file" if -f $destpath;
if (-f $srcpath)
{
- copy($srcpath, $destpath)
- or die "copy $srcpath -> $destpath failed: $!";
+ if ($filterfn->($srcpath))
+ {
+ copy($srcpath, $destpath)
+ or die "copy $srcpath -> $destpath failed: $!";
+ }
return 1;
}
@@ -32,7 +95,8 @@ sub copypath
while (my $entry = readdir($directory))
{
next if ($entry eq '.' || $entry eq '..');
- RecursiveCopy::copypath("$srcpath/$entry", "$destpath/$entry")
+ RecursiveCopy::_copypath_recurse("$srcpath/$entry",
+ "$destpath/$entry", $filterfn)
or die "copypath $srcpath/$entry -> $destpath/$entry failed";
}
closedir($directory);
--
2.1.0
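For what it's worth, a usage sketch (paths are placeholders; the filter mirrors the POD example above and the one used later for filesystem backups):

use RecursiveCopy;

# Copy a data directory, skipping anything under pg_log and the
# postmaster.pid file. Source and destination paths are placeholders.
RecursiveCopy::copypath(
    '/path/to/src/pgdata', '/path/to/dest/pgdata',
    filterfn => sub {
        my $src = shift;
        return $src !~ /\/pg_log\// && $src !~ /\/postmaster\.pid$/;
    });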
0003-TAP-Add-easier-more-flexible-ways-to-invoke-psql.patch (text/x-patch)
From 6ad042a28bf61902765c14f1c48c72251d98a3ea Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:08:21 +0800
Subject: [PATCH 3/7] TAP: Add easier, more flexible ways to invoke psql
The PostgresNode::psql method is limited - it offers no access
to the return code from psql, ignores SQL errors, and offers
no access to psql's stderr.
Provide a new psql_expert that addresses those limitations and
can be used more flexibly - see the embedded PerlDoc for details.
Also add a new psql_check method that invokes psql and dies if
the SQL fails with any error. Test scripts should use this so
they automatically die if SQL that should succeed fails instead;
with the current psql method such failures would go undetected.
---
src/test/perl/PostgresNode.pm | 233 +++++++++++++++++++++++++++++++++++++++---
1 file changed, 221 insertions(+), 12 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index a8e6f0c..c18c94a 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -21,8 +21,20 @@ PostgresNode - class representing PostgreSQL server instance
$node->restart('fast');
# run a query with psql
- # like: psql -qAXt postgres -c 'SELECT 1;'
- $psql_stdout = $node->psql('postgres', 'SELECT 1');
+ # like:
+ # echo 'SELECT 1' | psql -qAXt postgres -v ON_ERROR_STOP=1
+ $psql_stdout = $node->psql_check('postgres', 'SELECT 1');
+
+ # Run psql with a timeout, capturing stdout and stderr
+ # as well as the psql exit code. Pass some extra psql
+ # options. If there's an error from psql raise an exception.
+ my ($stdout, $stderr, $timed_out);
+ my $cmdret = $node->psql_expert('postgres', 'SELECT pg_sleep(60)',
+ stdout => \$stdout, stderr => \$stderr,
+ timeout => 30, timed_out => \$timed_out,
+ extra_params => ['--single-transaction'],
+ on_error_die => 1);
+ print "Sleep timed out" if $timed_out;
# run query every second until it returns 't'
# or times out
@@ -69,6 +81,7 @@ use IPC::Run;
use RecursiveCopy;
use Test::More;
use TestLib ();
+use Scalar::Util qw(blessed);
our @EXPORT = qw(
get_new_node
@@ -780,11 +793,16 @@ sub teardown_node
=item $node->psql(dbname, sql)
-Run a query with psql and return stdout, or on error print stderr.
+Run a query with psql and return stdout if psql returns with no error.
-Executes a query/script with psql and returns psql's standard output. psql is
-run in unaligned tuples-only quiet mode with psqlrc disabled so simple queries
-will just return the result row(s) with fields separated by commas.
+psql is run in unaligned tuples-only quiet mode with psqlrc disabled so simple
+queries will just return the result row(s) with fields separated by commas.
+
+If any stderr output is generated it is printed and discarded.
+
+Nonzero return codes from psql are ignored and discarded.
+
+Use psql_expert for more control.
=cut
@@ -793,24 +811,215 @@ sub psql
my ($self, $dbname, $sql) = @_;
my ($stdout, $stderr);
+
my $name = $self->name;
print("### Running SQL command on node \"$name\": $sql\n");
- IPC::Run::run [ 'psql', '-XAtq', '-d', $self->connstr($dbname), '-f',
- '-' ], '<', \$sql, '>', \$stdout, '2>', \$stderr
- or die;
+ # Run the command, ignoring errors
+ $self->psql_expert($dbname, $sql, stdout => \$stdout, stderr => \$stderr,
+ die_on_error => 0, on_error_stop => 0);
+
+ if ($stderr ne "")
+ {
+ print "#### Begin standard error\n";
+ print $stderr;
+ print "\n#### End standard error\n";
+ }
+ return $stdout;
+}
+
+=pod $node->psql_check($dbname, $sql) => stdout
+
+Invoke 'psql' to run 'sql' on 'dbname' and return its stdout on success.
+Die if the SQL produces an error. Runs with ON_ERROR_STOP set.
+
+Takes optional extra params like timeout and timed_out parameters with the same
+options as psql_expert.
+=cut
+
+sub psql_check
+{
+ my ($self, $dbname, $sql, %params) = @_;
+
+ my ($stdout, $stderr);
+
+ my $ret = $self->psql_expert($dbname, $sql,
+ %params,
+ stdout => \$stdout, stderr => \$stderr,
+ on_error_die => 1, on_error_stop => 1);
+
+ # psql can emit stderr from NOTICEs etc
if ($stderr ne "")
{
print "#### Begin standard error\n";
print $stderr;
- print "#### End standard error\n";
+ print "\n#### End standard error\n";
}
- chomp $stdout;
- $stdout =~ s/\r//g if $Config{osname} eq 'msys';
+
return $stdout;
}
+=pod $node->psql_expert($dbname, $sql, %params) => psql_retval
+
+Invoke 'psql' to run 'sql' on 'dbname' and return the return value from
+psql, which is run with on_error_stop by default so that it will stop running
+sql and return 3 if the passed SQL results in an error.
+
+psql is invoked in tuples-only unaligned mode with reading of psqlrc disabled. That
+may be overridden by passing extra psql parameters.
+
+stdout and stderr are transformed to unix line endings if on Windows and any
+trailing newline is removed.
+
+Dies on failure to invoke psql but not if psql exits with a nonzero return code
+(unless on_error_die specified). Dies if psql exits with a signal.
+
+=over
+
+=item stdout => \$stdout
+
+If a scalar to write stdout to is passed as the keyword parameter 'stdout' it
+will be set to psql's stdout.
+
+=item stderr => \$stderr
+
+Same as 'stdout' but gets stderr. If the same scalar is passed for both stdout
+and stderr the results may be interleaved unpredictably.
+
+=item on_error_stop => 1
+
+By default psql_expert invokes psql with ON_ERROR_STOP=1 set so it will
+stop executing SQL at the first error and return exit code 2. If you want
+to ignore errors, pass 0 to on_error_stop.
+
+=item on_error_die => 0
+
+By default psql_expert returns psql's result code. Pass on_error_die to instead
+die with an informative message.
+
+=item timeout => 'interval'
+
+Set a timeout for the psql call as an interval accepted by IPC::Run::timer.
+Integer seconds is fine. psql_expert dies with an exception on timeout unless
+the timed_out parameter is passed.
+
+=item timed_out => \$timed_out
+
+Keyword parameter. Pass a scalar reference and it'll be set to true if the psql
+call timed out instead of dying. Has no effect unless 'timeout' is set.
+
+=item extra_params => ['--single-transaction']
+
+Pass additional parameters to psql. Must be an arrayref.
+
+=back
+
+e.g.
+
+ my ($stdout, $stderr, $timed_out);
+ my $cmdret = $psql_expert('postgres', 'SELECT pg_sleep(60)',
+ stdout => \$stdout, stderr => \$stderr,
+ timeout => 30, timed_out => \$timed_out,
+ extra_params => ['--single-transaction'])
+
+will set $cmdret to undef and $timed_out to a true value.
+
+ $psql_expert('postgres', $sql, on_error_die => 1);
+
+dies with an informative message if $sql fails.
+
+=cut
+
+sub psql_expert
+{
+ my ($self, $dbname, $sql, %params) = @_;
+
+ my $stdout = $params{stdout};
+ my $stderr = $params{stderr};
+ my $timeout = undef;
+ my $timeout_exception = 'psql timed out';
+ my @psql_params = ('psql', '-XAtq', '-d', $self->connstr($dbname), '-f', '-');
+
+ $params{on_error_stop} = 1 unless defined $params{on_error_stop};
+ $params{on_error_die} = 0 unless defined $params{on_error_die};
+
+ push @psql_params, '-v', 'ON_ERROR_STOP=1' if $params{on_error_stop};
+ push @psql_params, @{$params{extra_params}} if defined $params{extra_params};
+
+ $timeout = IPC::Run::timeout( $params{timeout}, exception => $timeout_exception)
+ if (defined($params{timeout}));
+
+ # IPC::Run would otherwise append to existing contents:
+ $$stdout = "" if ref($stdout);
+ $$stderr = "" if ref($stderr);
+
+ my $ret;
+
+ # Perl's exception handling is ... interesting. Especially since we have to
+ # support 5.8.8. So we hide that from the caller, returning true/false for
+ # timeout instead. See
+ # http://search.cpan.org/~ether/Try-Tiny-0.24/lib/Try/Tiny.pm for
+ # background.
+ my $error = do {
+ local $@;
+ eval {
+ IPC::Run::run \@psql_params, '<', \$sql, '>', $stdout, '2>', $stderr, $timeout;
+ $ret = $?;
+ };
+ my $exc_save = $@;
+ if ($exc_save) {
+ # IPC::Run::run threw an exception. re-throw unless it's a
+ # timeout, which we'll handle by testing is_expired
+ if (blessed($exc_save) || $exc_save ne $timeout_exception) {
+ print "Exception from IPC::Run::run when invoking psql: '$exc_save'\n";
+ die $exc_save;
+ } else {
+ $ret = undef;
+
+ die "Got timeout exception '$exc_save' but timer not expired?!"
+ unless $timeout->is_expired;
+
+ if (defined($params{timed_out}))
+ {
+ ${$params{timed_out}} = 1;
+ } else {
+ die "psql timed out while running '@psql_params', stderr '$$stderr'";
+ }
+ }
+ }
+ };
+
+ chomp $$stdout;
+ $$stdout =~ s/\r//g if $Config{osname} eq 'msys';
+
+ chomp $$stderr;
+ $$stderr =~ s/\r//g if $Config{osname} eq 'msys';
+
+ # See http://perldoc.perl.org/perlvar.html#%24CHILD_ERROR
+ # We don't use IPC::Run::Simple to limit dependencies.
+ #
+ # We always die on signal. If someone wants to capture signals
+ # to psql we can return it with another reference out parameter.
+ die "psql exited with signal " . ($ret & 127) . ": '$$stderr' while running '@psql_params'"
+ if $ret & 127;
+ die "psql exited with core dump: '$$stderr' while running '@psql_params'"
+ if $ret & 128;
+ $ret = $ret >> 8;
+
+ if ($ret && $params{on_error_die}) {
+ die "psql command line syntax error or internal error: '$$stderr' while running '@psql_params'"
+ if $ret == 1;
+ die "psql connection error: '$$stderr' while running '@psql_params'"
+ if $ret == 2;
+ die "error when running passed SQL: '$$stderr' while running '@psql_params'"
+ if $ret == 3;
+ die "unexpected error code $ret from psql: '$$stderr' while running '@psql_params'";
+ }
+
+ return $ret;
+}
+
=pod
=item $node->poll_query_until(dbname, query)
--
2.1.0
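To show how the two new entry points are meant to be combined in a test script (just a sketch; the node setup lines use the existing PostgresNode API and the table name is made up):

use PostgresNode;
use TestLib;
use Test::More tests => 1;

my $node = get_new_node('main');
$node->init;
$node->start;

# Incidental SQL: psql_check dies on failure, so a broken setup
# aborts the script instead of being silently ignored.
$node->psql_check('postgres', 'CREATE TABLE counted (x integer)');

# Full control: capture stdout/stderr and psql's exit code.
my ($stdout, $stderr);
my $ret = $node->psql_expert('postgres', 'SELECT count(*) FROM counted',
    stdout => \$stdout, stderr => \$stderr);
is($stdout, '0', 'new table is empty');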
0004-TAP-Add-support-for-taking-filesystem-level-backups.patch (text/x-patch)
From ee832b9bc7b240b279f0228e624356ca3fd63a27 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:21:25 +0800
Subject: [PATCH 4/7] TAP: Add support for taking filesystem level backups
---
src/test/perl/PostgresNode.pm | 83 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 71 insertions(+), 12 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index c18c94a..099bb89 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -463,11 +463,81 @@ sub backup
my $port = $self->port;
my $name = $self->name;
- print "# Taking backup $backup_name from node \"$name\"\n";
+ print "# Taking pg_basebackup $backup_name from node \"$name\"\n";
TestLib::system_or_bail("pg_basebackup -D $backup_path -p $port -x");
print "# Backup finished\n";
}
+=item $node->backup_fs_hot(backup_name)
+
+Create a backup with a filesystem level copy in $node->backup_dir,
+including transaction logs. Archiving must be enabled as pg_start_backup
+and pg_stop_backup are used. This is not checked or enforced.
+
+The backup name is passed as the backup label to pg_start_backup.
+
+=cut
+
+sub backup_fs_hot
+{
+ my ($self, $backup_name) = @_;
+ $self->_backup_fs($backup_name, 1);
+}
+
+=item $node->backup_fs_cold(backup_name)
+
+Create a backup with a filesystem level copy in $node->backup_dir,
+including transaction logs. The server must be stopped as no
+attempt to handle concurrent writes is made.
+
+Use backup or backup_fs_hot if you want to back up a running
+server.
+
+=cut
+
+sub backup_fs_cold
+{
+ my ($self, $backup_name) = @_;
+ $self->_backup_fs($backup_name, 0);
+}
+
+
+# Common sub of backup_fs_hot and backup_fs_cold
+sub _backup_fs
+{
+ my ($self, $backup_name, $hot) = @_;
+ my $backup_path = $self->backup_dir . '/' . $backup_name;
+ my $port = $self->port;
+ my $name = $self->name;
+
+ print "# Taking filesystem level backup $backup_name from node \"$name\"\n";
+
+ if ($hot) {
+ my $stdout = $self->psql_check('postgres', "SELECT * FROM pg_start_backup('$backup_name');");
+ print "# pg_start_backup: $stdout\n";
+ }
+
+ RecursiveCopy::copypath($self->data_dir, $backup_path,
+ filterfn => sub {
+ my $src = shift;
+ return $src !~ /\/pg_log\// && $src !~ /\/postmaster.pid$/;
+ }
+ );
+
+ if ($hot)
+ {
+ # We ignore pg_stop_backup's return value. We also assume archiving
+ # is enabled; otherwise the caller will have to copy the remaining
+ # segments.
+ my $stdout = $self->psql_check('postgres', 'SELECT * FROM pg_stop_backup();');
+ print "# pg_stop_backup: $stdout\n";
+ }
+
+ print "# Backup finished\n";
+}
+
+
+
=pod
=item $node->init_from_backup(root_node, backup_name)
@@ -917,17 +987,6 @@ Pass additional parameters to psql. Must be an arrayref.
e.g.
- my ($stdout, $stderr, $timed_out);
- my $cmdret = $psql_expert('postgres', 'SELECT pg_sleep(60)',
- stdout => \$stdout, stderr => \$stderr,
- timeout => 30, timed_out => \$timed_out,
- extra_params => ['--single-transaction'])
-
-will set $cmdret to undef and $timed_out to a true value.
-
- $psql_expert('postgres', $sql, on_error_die => 1);
-
-dies with an informative message if $sql fails.
=cut
--
2.1.0
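A sketch of how a test might drive the new backup methods (node names and init parameters are assumptions layered on the existing PostgresNode API; archiving is enabled so pg_start_backup/pg_stop_backup are sufficient, per the caveat in the POD):

# Master with WAL archiving, so the hot backup's WAL ends up in the archive.
my $node_master = get_new_node('master');
$node_master->init(has_archiving => 1, allows_streaming => 1);
$node_master->start;

# Filesystem-level backup of the running server ...
$node_master->backup_fs_hot('fs_hot');

# ... and of a stopped one.
$node_master->stop;
$node_master->backup_fs_cold('fs_cold');
$node_master->start;

# Bring up a standby from the hot backup, restoring from the archive.
my $node_standby = get_new_node('standby');
$node_standby->init_from_backup($node_master, 'fs_hot', has_restoring => 1);
$node_standby->start;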
0005-TAP-Suffix-temporary-directories-with-node-name.patch (text/x-patch)
From c30b466bde94452d4b4aad79199b9e8b149785bc Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:27:36 +0800
Subject: [PATCH 5/7] TAP: Suffix temporary directories with node name
Temporary directories for PostgreSQL nodes used in TAP tests now
have the node name appended to them.
By Kyotaro Horiguchi
---
src/test/perl/PostgresNode.pm | 2 +-
src/test/perl/TestLib.pm | 6 ++++--
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 099bb89..4eaa3ed 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -128,7 +128,7 @@ sub new
my $self = {
_port => $pgport,
_host => $pghost,
- _basedir => TestLib::tempdir,
+ _basedir => TestLib::tempdir($name),
_name => $name,
_logfile => "$TestLib::log_path/${testname}_${name}.log" };
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 3d11cbb..8c13655 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -112,9 +112,11 @@ INIT
#
sub tempdir
{
+ my ($prefix) = @_;
+ $prefix = "tmp_test" if (!$prefix);
return File::Temp::tempdir(
- 'tmp_testXXXX',
- DIR => $tmp_check,
+ $prefix . '_XXXX',
+ DIR => $tmp_check,
CLEANUP => 1);
}
--
2.1.0
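The effect in a test script, as a tiny sketch (node names are arbitrary):

my $dir_master  = TestLib::tempdir('master');    # tmp_check/master_XXXX
my $dir_standby = TestLib::tempdir('standby');   # tmp_check/standby_XXXX
my $dir_default = TestLib::tempdir();            # tmp_check/tmp_test_XXXX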
0006-TAP-Retain-tempdirs-for-failed-tests.patch (text/x-patch)
From 25544852d530dce6ccd9e1e31e966f4031d5417a Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:31:31 +0800
Subject: [PATCH 6/7] TAP: Retain tempdirs for failed tests
If a test fails temporary directories created for that test are retained
- in particular, PostgresNode data directories. This makes debugging
much easier.
By Kyotaro Horiguchi
---
src/test/perl/TestLib.pm | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 8c13655..63a0e77 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -107,6 +107,12 @@ INIT
autoflush TESTLOG 1;
}
+END
+{
+ # Preserve temporary directory for this test on failure
+ $File::Temp::KEEP_ALL = 1 unless Test::More->builder->is_passing;
+}
+
#
# Helper functions
#
--
2.1.0
0007-TAP-Perltidy-PostgresNode.pm.patch (text/x-patch)
From 80c5e4e02ee43e76f28ceadb9f9d7cf85df3015b Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:44:11 +0800
Subject: [PATCH 7/7] TAP: Perltidy PostgresNode.pm
Recent PostgresNode changes were committed without using the project's
perltidyrc. Tidy it now.
---
src/test/perl/PostgresNode.pm | 204 ++++++++++++++++++++++++------------------
src/test/perl/README | 3 +
src/test/perl/SimpleTee.pm | 13 +--
3 files changed, 130 insertions(+), 90 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 4eaa3ed..a965e71 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1,3 +1,4 @@
+
=pod
=head1 NAME
@@ -119,18 +120,15 @@ of finding port numbers, registering instances for cleanup, etc.
sub new
{
- my $class = shift;
- my $name = shift;
- my $pghost = shift;
- my $pgport = shift;
+ my ($class, $name, $pghost, $pgport) = @_;
my $testname = basename($0);
$testname =~ s/\.[^.]+$//;
- my $self = {
- _port => $pgport,
- _host => $pghost,
- _basedir => TestLib::tempdir($name),
- _name => $name,
- _logfile => "$TestLib::log_path/${testname}_${name}.log" };
+ my $self = {
+ _port => $pgport,
+ _host => $pghost,
+ _basedir => TestLib::tempdir($name),
+ _name => $name,
+ _logfile => "$TestLib::log_path/${testname}_${name}.log" };
bless $self, $class;
$self->dump_info;
@@ -380,7 +378,7 @@ sub init
$params{hba_permit_replication} = 1
unless defined $params{hba_permit_replication};
$params{allows_streaming} = 0 unless defined $params{allows_streaming};
- $params{has_archiving} = 0 unless defined $params{has_archiving};
+ $params{has_archiving} = 0 unless defined $params{has_archiving};
mkdir $self->backup_dir;
mkdir $self->archive_dir;
@@ -418,7 +416,7 @@ sub init
close $conf;
$self->set_replication_conf if $params{hba_permit_replication};
- $self->enable_archiving if $params{has_archiving};
+ $self->enable_archiving if $params{has_archiving};
}
=pod
@@ -510,26 +508,31 @@ sub _backup_fs
my $port = $self->port;
my $name = $self->name;
- print "# Taking filesystem level backup $backup_name from node \"$name\"\n";
+ print
+ "# Taking filesystem level backup $backup_name from node \"$name\"\n";
- if ($hot) {
- my $stdout = $self->psql_check('postgres', "SELECT * FROM pg_start_backup('$backup_name');");
+ if ($hot)
+ {
+ my $stdout = $self->psql_check('postgres',
+ "SELECT * FROM pg_start_backup('$backup_name');");
print "# pg_start_backup: $stdout\n";
}
- RecursiveCopy::copypath($self->data_dir, $backup_path,
- filterfn => sub {
- my $src = shift;
- return $src !~ /\/pg_log\// && $src !~ /\/postmaster.pid$/;
- }
- );
+ RecursiveCopy::copypath(
+ $self->data_dir,
+ $backup_path,
+ filterfn => sub {
+ my $src = shift;
+ return $src !~ /\/pg_log\// && $src !~ /\/postmaster.pid$/;
+ });
if ($hot)
{
# We ignore pg_stop_backup's return value. We also assume archiving
# is enabled; otherwise the caller will have to copy the remaining
# segments.
- my $stdout = $self->psql_check('postgres', 'SELECT * FROM pg_stop_backup();');
+ my $stdout =
+ $self->psql_check('postgres', 'SELECT * FROM pg_stop_backup();');
print "# pg_stop_backup: $stdout\n";
}
@@ -575,7 +578,7 @@ sub init_from_backup
$params{has_streaming} = 0 unless defined $params{has_streaming};
$params{hba_permit_replication} = 1
- unless defined $params{hba_permit_replication};
+ unless defined $params{hba_permit_replication};
$params{has_restoring} = 0 unless defined $params{has_restoring};
print
@@ -597,7 +600,7 @@ sub init_from_backup
qq(
port = $port
));
- $self->set_replication_conf if $params{hba_permit_replication};
+ $self->set_replication_conf if $params{hba_permit_replication};
$self->enable_streaming($root_node) if $params{has_streaming};
$self->enable_restoring($root_node) if $params{has_restoring};
}
@@ -690,19 +693,19 @@ sub promote
my $logfile = $self->logfile;
my $name = $self->name;
print "### Promoting node \"$name\"\n";
- TestLib::system_log('pg_ctl', '-D', $pgdata, '-l', $logfile,
- 'promote');
+ TestLib::system_log('pg_ctl', '-D', $pgdata, '-l', $logfile, 'promote');
}
# Internal routine to enable streaming replication on a standby node.
sub enable_streaming
{
- my ($self, $root_node) = @_;
+ my ($self, $root_node) = @_;
my $root_connstr = $root_node->connstr;
- my $name = $self->name;
+ my $name = $self->name;
print "### Enabling streaming replication for node \"$name\"\n";
- $self->append_conf('recovery.conf', qq(
+ $self->append_conf(
+ 'recovery.conf', qq(
primary_conninfo='$root_connstr application_name=$name'
standby_mode=on
));
@@ -711,7 +714,7 @@ standby_mode=on
# Internal routine to enable archive recovery command on a standby node
sub enable_restoring
{
- my ($self, $root_node) = @_;
+ my ($self, $root_node) = @_;
my $path = $root_node->archive_dir;
my $name = $self->name;
@@ -724,11 +727,13 @@ sub enable_restoring
# first. Paths also need to be double-quoted to prevent failures where
# the path contains spaces.
$path =~ s{\\}{\\\\}g if ($TestLib::windows_os);
- my $copy_command = $TestLib::windows_os ?
- qq{copy "$path\\\\%f" "%p"} :
- qq{cp $path/%f %p};
+ my $copy_command =
+ $TestLib::windows_os
+ ? qq{copy "$path\\\\%f" "%p"}
+ : qq{cp $path/%f %p};
- $self->append_conf('recovery.conf', qq(
+ $self->append_conf(
+ 'recovery.conf', qq(
restore_command = '$copy_command'
standby_mode = on
));
@@ -750,12 +755,14 @@ sub enable_archiving
# first. Paths also need to be double-quoted to prevent failures where
# the path contains spaces.
$path =~ s{\\}{\\\\}g if ($TestLib::windows_os);
- my $copy_command = $TestLib::windows_os ?
- qq{copy "%p" "$path\\\\%f"} :
- qq{cp %p $path/%f};
+ my $copy_command =
+ $TestLib::windows_os
+ ? qq{copy "%p" "$path\\\\%f"}
+ : qq{cp %p $path/%f};
# Enable archive_mode and archive_command on node
- $self->append_conf('postgresql.conf', qq(
+ $self->append_conf(
+ 'postgresql.conf', qq(
archive_mode = on
archive_command = '$copy_command'
));
@@ -886,8 +893,12 @@ sub psql
print("### Running SQL command on node \"$name\": $sql\n");
# Run the command, ignoring errors
- $self->psql_expert($dbname, $sql, stdout => \$stdout, stderr => \$stderr,
- die_on_error => 0, on_error_stop => 0);
+ $self->psql_expert(
+ $dbname, $sql,
+ stdout => \$stdout,
+ stderr => \$stderr,
+ die_on_error => 0,
+ on_error_stop => 0);
if ($stderr ne "")
{
@@ -914,10 +925,13 @@ sub psql_check
my ($stdout, $stderr);
- my $ret = $self->psql_expert($dbname, $sql,
- %params,
- stdout => \$stdout, stderr => \$stderr,
- on_error_die => 1, on_error_stop => 1);
+ my $ret = $self->psql_expert(
+ $dbname, $sql,
+ %params,
+ stdout => \$stdout,
+ stderr => \$stderr,
+ on_error_die => 1,
+ on_error_stop => 1);
# psql can emit stderr from NOTICEs etc
if ($stderr ne "")
@@ -994,20 +1008,23 @@ sub psql_expert
{
my ($self, $dbname, $sql, %params) = @_;
- my $stdout = $params{stdout};
- my $stderr = $params{stderr};
- my $timeout = undef;
+ my $stdout = $params{stdout};
+ my $stderr = $params{stderr};
+ my $timeout = undef;
my $timeout_exception = 'psql timed out';
- my @psql_params = ('psql', '-XAtq', '-d', $self->connstr($dbname), '-f', '-');
+ my @psql_params =
+ ('psql', '-XAtq', '-d', $self->connstr($dbname), '-f', '-');
$params{on_error_stop} = 1 unless defined $params{on_error_stop};
- $params{on_error_die} = 0 unless defined $params{on_error_die};
+ $params{on_error_die} = 0 unless defined $params{on_error_die};
push @psql_params, '-v', 'ON_ERROR_STOP=1' if $params{on_error_stop};
- push @psql_params, @{$params{extra_params}} if defined $params{extra_params};
+ push @psql_params, @{ $params{extra_params} }
+ if defined $params{extra_params};
- $timeout = IPC::Run::timeout( $params{timeout}, exception => $timeout_exception)
- if (defined($params{timeout}));
+ $timeout =
+ IPC::Run::timeout($params{timeout}, exception => $timeout_exception)
+ if (defined($params{timeout}));
# IPC::Run would otherwise append to existing contents:
$$stdout = "" if ref($stdout);
@@ -1015,37 +1032,48 @@ sub psql_expert
my $ret;
- # Perl's exception handling is ... interesting. Especially since we have to
- # support 5.8.8. So we hide that from the caller, returning true/false for
- # timeout instead. See
- # http://search.cpan.org/~ether/Try-Tiny-0.24/lib/Try/Tiny.pm for
- # background.
- my $error = do {
+ # Perl's exception handling is ... interesting. Especially since we have to
+ # support 5.8.8. So we hide that from the caller, returning true/false for
+ # timeout instead. See
+ # http://search.cpan.org/~ether/Try-Tiny-0.24/lib/Try/Tiny.pm for
+ # background.
+ my $error = do
+ {
local $@;
eval {
- IPC::Run::run \@psql_params, '<', \$sql, '>', $stdout, '2>', $stderr, $timeout;
+ IPC::Run::run \@psql_params, '<', \$sql, '>', $stdout, '2>',
+ $stderr, $timeout;
$ret = $?;
};
my $exc_save = $@;
- if ($exc_save) {
- # IPC::Run::run threw an exception. re-throw unless it's a
- # timeout, which we'll handle by testing is_expired
- if (blessed($exc_save) || $exc_save ne $timeout_exception) {
- print "Exception from IPC::Run::run when invoking psql: '$exc_save'\n";
- die $exc_save;
- } else {
- $ret = undef;
-
- die "Got timeout exception '$exc_save' but timer not expired?!"
+ if ($exc_save)
+ {
+ # IPC::Run::run threw an exception. re-throw unless it's a
+ # timeout, which we'll handle by testing is_expired
+ if (blessed($exc_save) || $exc_save ne $timeout_exception)
+ {
+ print
+"Exception from IPC::Run::run when invoking psql: '$exc_save'\n";
+ die $exc_save;
+ }
+ else
+ {
+ $ret = undef;
+
+ die
+ "Got timeout exception '$exc_save' but timer not expired?!"
unless $timeout->is_expired;
- if (defined($params{timed_out}))
- {
- ${$params{timed_out}} = 1;
- } else {
- die "psql timed out while running '@psql_params', stderr '$$stderr'";
- }
- }
+ if (defined($params{timed_out}))
+ {
+ ${ $params{timed_out} } = 1;
+ }
+ else
+ {
+ die
+"psql timed out while running '@psql_params', stderr '$$stderr'";
+ }
+ }
}
};
@@ -1060,20 +1088,26 @@ sub psql_expert
#
# We always die on signal. If someone wants to capture signals
# to psql we can return it with another reference out parameter.
- die "psql exited with signal " . ($ret & 127) . ": '$$stderr' while running '@psql_params'"
+ die "psql exited with signal "
+ . ($ret & 127)
+ . ": '$$stderr' while running '@psql_params'"
if $ret & 127;
die "psql exited with core dump: '$$stderr' while running '@psql_params'"
if $ret & 128;
$ret = $ret >> 8;
- if ($ret && $params{on_error_die}) {
- die "psql command line syntax error or internal error: '$$stderr' while running '@psql_params'"
- if $ret == 1;
- die "psql connection error: '$$stderr' while running '@psql_params'"
- if $ret == 2;
- die "error when running passed SQL: '$$stderr' while running '@psql_params'"
- if $ret == 3;
- die "unexpected error code $ret from psql: '$$stderr' while running '@psql_params'";
+ if ($ret && $params{on_error_die})
+ {
+ die
+"psql command line syntax error or internal error: '$$stderr' while running '@psql_params'"
+ if $ret == 1;
+ die "psql connection error: '$$stderr' while running '@psql_params'"
+ if $ret == 2;
+ die
+"error when running passed SQL: '$$stderr' while running '@psql_params'"
+ if $ret == 3;
+ die
+"unexpected error code $ret from psql: '$$stderr' while running '@psql_params'";
}
return $ret;
diff --git a/src/test/perl/README b/src/test/perl/README
index 7b6de5f..9eae159 100644
--- a/src/test/perl/README
+++ b/src/test/perl/README
@@ -11,6 +11,9 @@ isolation tester specs in src/test/isolation, if possible. If not, check to
see if your new tests make sense under an existing tree in src/test, like
src/test/ssl, or should be added to one of the suites for an existing utility.
+Note that all tests and test tools should have perltidy run on them before
+patches are submitted, using perltidy --profile=src/tools/pgindent/perltidyrc
+
Writing tests
-------------
diff --git a/src/test/perl/SimpleTee.pm b/src/test/perl/SimpleTee.pm
index 5da82d0..ea2f2ee 100644
--- a/src/test/perl/SimpleTee.pm
+++ b/src/test/perl/SimpleTee.pm
@@ -10,17 +10,20 @@
package SimpleTee;
use strict;
-sub TIEHANDLE {
+sub TIEHANDLE
+{
my $self = shift;
bless \@_, $self;
}
-sub PRINT {
+sub PRINT
+{
my $self = shift;
- my $ok = 1;
- for my $fh (@$self) {
+ my $ok = 1;
+ for my $fh (@$self)
+ {
print $fh @_ or $ok = 0;
- $fh->flush or $ok = 0;
+ $fh->flush or $ok = 0;
}
return $ok;
}
--
2.1.0
On Tue, Mar 1, 2016 at 2:48 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
Hi all
I've been working with the new TAP tests for recovery and have a number of
enhancements I'd like to make to the tooling to make writing tests easier
and nicer. I've also included two improvements proposed by Kyotaro
HORIGUCHI in the prior thread about the now-committed TAP recovery tests.
I developed these changes as part of testing failover slots and logical
decoding timeline following, where I found a need for better control over
psql, the ability to make filesystem level backups, etc. It doesn't make
sense to try to jam all that into my test script when it belongs in the
infrastructure.
Patches attached, each explains what it does and what for.
You are using both "die_on_error" and "on_error_die" in your code. That
looks like a mismatch!
On 1 March 2016 at 22:08, salvador fandino <sfandino@gmail.com> wrote:
You are using both "die_on_error" and "on_error_die" in your code. That
looks like a mismatch!
Thanks for taking a look.
In this case it has no effect since it's just specifying the default
explicitly, but it's certainly wrong. Fixed in git.
I'll wait to see what else comes up before posting another series.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Craig Ringer wrote:
Hi all
I've been working with the new TAP tests for recovery and have a number of
enhancements I'd like to make to the tooling to make writing tests easier
and nicer.
I think we should change the existing psql method to be what you propose
as psql_expert. I don't see any advantage in keeping the old one. Many
of the existing uses of psql should become what you call psql_check; but
we should probably call that psql_ok() instead, and also have a
psql_fails() for src/test/recovery/t/001_stream_rep.pl (and others to
come).
+=item $node->backup_fs_hot(backup_name)
+
+Create a backup with a filesystem level copy in $node->backup_dir,
+including transaction logs. Archiving must be enabled as pg_start_backup
+and pg_stop_backup are used. This is not checked or enforced.
Perhaps "WAL archiving or streaming must be enabled ..."
I would rather have the patches be submitted with a reasonable
approximation at indentation rather than submit a separate indent patch.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Just pushed 0006.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Craig Ringer wrote:
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 3d11cbb..8c13655 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -112,9 +112,11 @@ INIT
#
sub tempdir
{
+ my ($prefix) = @_;
+ $prefix = "tmp_test" if (!$prefix);
This should be "unless defined $prefix". Otherwise if you pass the
string literal "0" as prefix it will be ignored.
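That is, the check would become:

    $prefix = "tmp_test" unless defined $prefix;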
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2 March 2016 at 07:07, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
This should be "unless defined $prefix". Otherwise if you pass the
string literal "0" as prefix it will be ignored.
Ha. I thought something was funny with !$prefix when splitting that out of
Kyotaro Horiguchi's patch but couldn't put my finger on what.
Will amend in the next push. I'll keep the committed 006 Retain tempdirs
for failed tests in that series but amend it to show it's committed
upstream.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 2 March 2016 at 05:46, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
I think we should change the existing psql method to be what you propose
as psql_expert. I don't see any advantage in keeping the old one. Many
of the existing uses of psql should become what you call psql_check; but
we should probably call that psql_ok() instead, and also have a
psql_fails() for src/test/recovery/t/001_stream_rep.pl (and others to
come).
I agree and that's what I really wanted to do. I just didn't want to
produce a massive diff that renames the method across all of src/bin etc
too, since I thought that'd be harder to commit and might have backporting
consequences.
If you think that's the way to go I'm very happy with that and will proceed.
Perhaps "WAL archiving or streaming must be enabled ..."
Good point, will do.
I would rather have the patches be submitted with a reasonable
approximation at indentation rather than submit a separate indent patch.
The reason I didn't do that is that the indenting in PostgresNode.pm is
already well out of whack and, TBH, I didn't want to rebase on top of a
perltidy'd version. I can bite the bullet and move the perltidy to the
start of the patch series then make sure each subsequent patch is tidy'd
but I'd want to be very sure you'd be OK to commit the perltidy of
PostgresNode.pm otherwise I'd have to rebase messily all over again...
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
2016-03-02 6:57 GMT+08:00 Alvaro Herrera <alvherre@2ndquadrant.com>:
Just pushed 0006.
This upset buildfarm members running prehistoric Perl versions because
is_passing was added after 5.8.8.
Fix attached.
I think I'm going to have to do an archaeology-grade Perl install, there's
just too much to keep an eye on manually. Ugh.
Why do we require a 10yo Perl anyway?
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 2 March 2016 at 11:22, Craig Ringer <craig@2ndquadrant.com> wrote:
2016-03-02 6:57 GMT+08:00 Alvaro Herrera <alvherre@2ndquadrant.com>:
Just pushed 0006.
This upset buildfarm members running prehistoric Perl versions because
is_passing was added after 5.8.8.
Fix attached.
Really, this time.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-TAP-Fix-Perl-5.8.8-support.patch (text/x-patch)
From bd913623092d92cc65d5b94b1b0c3fd5e66b8856 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Wed, 2 Mar 2016 11:17:46 +0800
Subject: [PATCH] TAP: Fix Perl 5.8.8 support
Commit 7132810c (Retain tempdirs for failed tests) used the is_passing
method present only after Perl 5.10 in Test::More 0.89_01. Work around
its absence.
---
src/test/perl/TestLib.pm | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 63a0e77..c069d43 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -110,7 +110,23 @@ INIT
END
{
# Preserve temporary directory for this test on failure
- $File::Temp::KEEP_ALL = 1 unless Test::More->builder->is_passing;
+ #
+ # We should be able to use Test::More->builder->is_passing here, but
+ # we require Perl 5.8.8 support so we have to work around it.
+ #
+ $File::Temp::KEEP_ALL = 1 unless all_tests_passing;
+}
+
+# Workaround for is_passing being missing prior to Test::More 0.89_01
+# Credit http://stackoverflow.com/a/1506700/398670
+sub all_tests_passing
+{
+ my $FAILcount = 0;
+ foreach my $detail (Test::Builder->details())
+ {
+ if (${%$detail}{ok} == 0) { $FAILcount++; }
+ }
+ return !$FAILcount;
}
#
--
2.1.0
On 2 March 2016 at 11:23, Craig Ringer <craig@2ndquadrant.com> wrote:
Really, this time.
Really, really this time, the version in git that actually works, not a
format-patch'd version before I made a last fix. Sigh. I can't even blame
lack of coffee...
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-TAP-Fix-Perl-5.8.8-support.patch (text/x-patch)
From a6aa0139cc8fea2fb93b7c3087d8878d8923c49a Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Wed, 2 Mar 2016 11:17:46 +0800
Subject: [PATCH] TAP: Fix Perl 5.8.8 support
Commit 7132810c (Retain tempdirs for failed tests) used the is_passing
method present only after Perl 5.10 in Test::More 0.89_01. Work around
its absence.
---
src/test/perl/TestLib.pm | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 63a0e77..1900922 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -110,7 +110,23 @@ INIT
END
{
# Preserve temporary directory for this test on failure
- $File::Temp::KEEP_ALL = 1 unless Test::More->builder->is_passing;
+ #
+ # We should be able to use Test::More->builder->is_passing here, but
+ # we require Perl 5.8.8 support so we have to work around it.
+ #
+ $File::Temp::KEEP_ALL = 1 unless all_tests_passing();
+}
+
+# Workaround for is_passing being missing prior to Test::More 0.89_01
+# Credit http://stackoverflow.com/a/1506700/398670
+sub all_tests_passing
+{
+ my $FAILcount = 0;
+ foreach my $detail (Test::Builder->details())
+ {
+ if (${%$detail}{ok} == 0) { $FAILcount++; }
+ }
+ return !$FAILcount;
}
#
--
2.1.0
On 2 March 2016 at 10:07, Craig Ringer <craig@2ndquadrant.com> wrote:
On 2 March 2016 at 05:46, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
I think we should change the existing psql method to be what you propose
as psql_expert. I don't see any advantage in keeping the old one. Many
of the existing uses of psql should become what you call psql_check; but
we should probably call that psql_ok() instead, and also have a
psql_fails() for src/test/recovery/t/001_stream_rep.pl (and others to
come).
I agree and that's what I really wanted to do. I just didn't want to
produce a massive diff that renames the method across all of src/bin etc
too, since I thought that'd be harder to commit and might have backporting
consequences.
If you think that's the way to go I'm very happy with that and will
proceed.
I'll make the change you suggested to make 'psql_expert' into 'psql' and
change call sites to use it or psql_check as appropriate. I'll probably
make it an immediately following patch in the series so it's easier to
separate the bulk-rename from the functional changes, but it can be
trivially squashed for commit.
On reflection I want to keep the name as psql_check, rather than psql_ok.
I'll insert another patch that changes src/bin to use psql_check where
appropriate.
The reason I used _check rather than psql_ok is partly that psql_check
isn't a test. It doesn't run any Test::More checks; it die()s on failure
because failure isn't expected but is incidental to the test that's using
psql. I did it that way because I don't think the psql invocation should be
a test in itself - then you'd have to pass a test name to every psql_ok
invocation and you'd get a flood of meaningless micro-tests showing up that
obscure the real thing being tested. It'd also be a PITA maintaining the
number of tests in the tests => 'n' argument to Test::More.
So I'm inclined to keep it as psql_check, to avoid confusion with the names
'ok' and 'fails' that Test::More uses. It's not actually a test. I don't
think we need or should have a psql_ok wrapper, since with this change you
can now just write:
is($node->psql('db', 'SELECT syntaxerror;'), 3, 'psql exits with code 3 on
syntax error');
which is clearer and more specific than:
$node->psql_ok('db', 'SELECT syntaxerror;', test => 'psql exits on syntax
error');
The reason I didn't do that is that the indenting in PostgresNode.pm is
already well out of whack and, TBH, I didn't want to rebase on top of a
perltidy'd version. I can bite the bullet and move the perltidy to the
start of the patch series then make sure each subsequent patch is tidy'd
but I'd want to be very sure you'd be OK to commit the perltidy of
PostgresNode.pm otherwise I'd have to rebase messily all over again...
This wasn't as bad as I thought. I pulled the tidy changes to the
$self->psql stuff into that patch and rebased the rest to the start of the
series so it only touches what's currently committed. I agree that's better.
Updated tree pushed. I'll send a new patch series once I've done the
psql_ok part.
It's funny that as part of implementing timeline following in logical
decoding and implementing failover slots I'm writing perl test framework
improvements....
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Craig Ringer <craig@2ndquadrant.com> writes:
This upset buildfarm members running prehistoric Perl versions because
is_passing was added after 5.8.8.
Sir, RHEL6 is not prehistoric ... and this is failing on my server too.
I'm not sure when "is_passing" was added, but it was later than 5.10.1.
Fix attached.
Will check and apply.
I think I'm going to have to do an archaeology-grade Perl install, there's
just too much to keep an eye on manually. Ugh.
Why do we require a 10yo Perl anyway?
The point of running ancient versions in the buildfarm is so that we
don't move the goalposts on software compatibility without knowing it.
When we come across a good reason to desupport an old Perl or Tcl or
whatever version, we'll do it; but having to add another ten lines or
so of code doesn't seem like a good reason.
regards, tom lane
On Wed, Mar 2, 2016 at 1:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Craig Ringer <craig@2ndquadrant.com> writes:
This upset buildfarm members running prehistoric Perl versions because
is_passing was added after 5.8.8.
Sir, RHEL6 is not prehistoric ... and this is failing on my server too.
I'm not sure when "is_passing" was added, but it was later than 5.10.1.
Test::Builder is managing this variable, as documented here for people
wondering:
http://perldoc.perl.org/Test/Builder.html
I am tracking the first version as being 5.12.0.
details() would work, though; it is available in 5.8.8.
--
Michael
Craig Ringer <craig@2ndquadrant.com> writes:
Really, really this time, the version in git that actually works, not a
format-patch'd version before I made a last fix. Sigh. I can't even blame
lack of coffee...
Hmm, still doesn't work for me: make check-world dies with
Can't use string ("Test::Builder") as a HASH ref while "strict refs" in use at /
usr/share/perl5/Test/Builder.pm line 1798.
END failed--call queue aborted.
# Looks like your test exited with 22 just after 14.
The referenced line number is the end of the file, which is right in
keeping with the TAP tests' infrastructure's habit of being utterly
impenetrable when it's not working. My small amount of Perl-fu is
not sufficient to debug this.
perl-5.10.1-141.el6_7.1.x86_64
perl-Test-Harness-3.17-141.el6_7.1.x86_64
perl-Test-Simple-0.92-141.el6_7.1.x86_64
regards, tom lane
I wrote:
Can't use string ("Test::Builder") as a HASH ref while "strict refs" in use at /usr/share/perl5/Test/Builder.pm line 1798.
The referenced line number is the end of the file,
Oh, scratch that; I was looking at the wrong file. Actually,
/usr/share/perl5/Test/Builder.pm has
sub details {
my $self = shift;
return @{ $self->{Test_Results} };
}
and line 1798 is the "return" statement in that. I still lack enough
Perl-fu to decipher this, though.
regards, tom lane
Tom Lane wrote:
I wrote:
Can't use string ("Test::Builder") as a HASH ref while "strict refs" in use at /usr/share/perl5/Test/Builder.pm line 1798.
The referenced line number is the end of the file,
Oh, scratch that; I was looking at the wrong file. Actually,
/usr/share/perl5/Test/Builder.pm has

    sub details {
        my $self = shift;
        return @{ $self->{Test_Results} };
    }

and line 1798 is the "return" statement in that. I still lack enough
Perl-fu to decipher this, though.
I think it's complaining about being called as a class method. Perhaps
this would work:
+ foreach my $detail (Test::More->builder->details())
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Mar 2, 2016 at 2:18 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
I think it's complaining about being called as a class method. Perhaps
this would work:
+ foreach my $detail (Test::More->builder->details())
Yes, that's the problem. Instead of using details(), summary() is
enough actually. And it is enough to let caller know the failure when
just one test has been found as not passing. See attached.
--
Michael
Attachments:
tap-tmp-failure.patch (binary/octet-stream)
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 4fb43fe..564936e 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -110,7 +110,17 @@ INIT
END
{
# Preserve temporary directory for this test on failure
- $File::Temp::KEEP_ALL = 1 unless Test::More->builder->is_passing;
+ $File::Temp::KEEP_ALL = 1 unless all_tests_passing();
+}
+
+sub all_tests_passing
+{
+ my $fail_count = 0;
+ foreach my $status (Test::More->builder->summary)
+ {
+ return 0 unless $status;
+ }
+ return 1;
}
#
On 2 March 2016 at 13:22, Michael Paquier <michael.paquier@gmail.com> wrote:
Yes, that's the problem. Instead of using details(), summary() is
enough actually. And it is enough to let caller know the failure when
just one test has been found as not passing. See attached.
Thanks.
I'm setting up a container with Debian Wheezy, which looks old enough to
test proposed framework changes, so I'll be able to test future patches
against a real Perl 5.8.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Michael Paquier <michael.paquier@gmail.com> writes:
Yes, that's the problem. Instead of using details(), summary() is
enough actually. And it is enough to let caller know the failure when
just one test has been found as not passing. See attached.
This one works for me on RHEL6. Pushed; we'll see if the older
buildfarm members like it.
regards, tom lane
I wrote:
Michael Paquier <michael.paquier@gmail.com> writes:
Yes, that's the problem. Instead of using details(), summary() is
enough actually. And it is enough to let caller know the failure when
just one test has been found as not passing. See attached.
This one works for me on RHEL6. Pushed; we'll see if the older
buildfarm members like it.
I don't normally run the TAP tests on "prairiedog", because it's
too $!*&@ slow, but trying that manually seems to work. That's
the Perl 5.8.6 that Apple shipped with OS X 10.4 ... if there's
anything older in our buildfarm, I don't know about it.
regards, tom lane
Craig Ringer wrote:
Ha. I thought something was funny with !$prefix when splitting that out of
Kyotaro Horiguchi's patch but couldn't put my finger on what.
Will amend in the next push.
Pushed it with that fix. I also added a further "data_" prefix, so it's
"data_${name}_XXXX" now. Hopefully this is less problematic than
yesterday's ...
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 3 March 2016 at 05:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Pushed it with that fix. I also added a further "data_" prefix, so it's
"data_${name}_XXXX" now. Hopefully this is less problematic than
yesterday's ...
On the Perl 5.8.8 test env I've set up now, per
/messages/by-id/CAMsr+YGR6pU-gUyp-FT98XwXAsc9n6j-awZAqxvW_+P3RTC7cg@mail.gmail.com
master currently fails with
t/004_timeline_switch....."remove_tree" is not exported by the File::Path
module
remove_tree is only in File::Path 2.07 but Perl 5.8.8 has 1.08. No
buildfarm member has complained, so clearly we don't actually test with the
stated supported Perl version.
The attached patch fixes it to use the legacy File::Path interface 'rmtree'
so master runs on 5.8.8 again.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-TAP-File-Path-remove_tree-is-new-in-2.x-use-1.x-rmtr.patchtext/x-patch; charset=US-ASCII; name=0001-TAP-File-Path-remove_tree-is-new-in-2.x-use-1.x-rmtr.patchDownload
From a703cd78cf834df794253bb1cfd897daa96d244a Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Thu, 3 Mar 2016 13:18:23 +0800
Subject: [PATCH] TAP: File::Path::remove_tree is new in 2.x, use 1.x rmtree
instead
Preserve Perl 5.8.8 support by using the legacy rmtree call
instead of remove_tree from File::Path 2.x.
---
src/test/recovery/t/004_timeline_switch.pl | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index 58bf580..a180b94 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -3,7 +3,7 @@
# on a new timeline.
use strict;
use warnings;
-use File::Path qw(remove_tree);
+use File::Path;
use PostgresNode;
use TestLib;
use Test::More tests => 1;
@@ -46,7 +46,7 @@ $node_master->teardown_node;
$node_standby_1->promote;
# Switch standby 2 to replay from standby 1
-remove_tree($node_standby_2->data_dir . '/recovery.conf');
+File::Path::rmtree($node_standby_2->data_dir . '/recovery.conf');
my $connstr_1 = $node_standby_1->connstr;
$node_standby_2->append_conf(
'recovery.conf', qq(
--
2.1.0
On Thu, Mar 3, 2016 at 2:20 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
On the Perl 5.8.8 test env I've set up now, per
/messages/by-id/CAMsr+YGR6pU-gUyp-FT98XwXAsc9n6j-awZAqxvW_+P3RTC7cg@mail.gmail.com
master currently fails with
t/004_timeline_switch....."remove_tree" is not exported by the File::Path
module
remove_tree is only in File::Path 2.07 but Perl 5.8.8 has 1.08. No buildfarm
member has complained, so clearly we don't actually test with the stated
supported Perl version.
The attached patch fixes it to use the legacy File::Path interface 'rmtree'
so master runs on 5.8.8 again.
Crap. Thanks for spotting that, I was careless. You could still use
qw() and not use File::Path::rmtree, but that's a matter of taste.
--
Michael
On 3 March 2016 at 13:23, Michael Paquier <michael.paquier@gmail.com> wrote:
On Thu, Mar 3, 2016 at 2:20 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
On the Perl 5.8.8 test env I've set up now, per
/messages/by-id/CAMsr+YGR6pU-gUyp-FT98XwXAsc9n6j-awZAqxvW_+P3RTC7cg@mail.gmail.com
master currently fails with
t/004_timeline_switch....."remove_tree" is not exported by the File::Path
module
remove_tree is only in File::Path 2.07 but Perl 5.8.8 has 1.08. No buildfarm
member has complained, so clearly we don't actually test with the stated
supported Perl version.
The attached patch fixes it to use the legacy File::Path interface 'rmtree'
so master runs on 5.8.8 again.
Crap. Thanks for spotting that, I was careless. You could still use
qw() and not use File::Path::rmtree, but that's a matter of taste.
New patch series.
The first three are simple fixes that should go in without fuss:
001 fixes the above 5.8.8 compat issue.
002 fixes another minor whoopsie, a syntax error in src/test/recovery/t/
003_recovery_targets.pl that never got noticed because exit codes are
ignored.
003 runs perltidy on PostgresNode.pm to bring it back into conformance
after the recovery tests commit.
The rest are feature patches:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
005 adds the new psql functions psql_expert and psql_check. Then 006
renames psql_expert to psql and fixes up all call sites across the tree to
use the new interface. I found the bug in 002 as part of that process. I
anticipate that 005 and 006 would be squashed into one commit to master,
but I've kept them separate in my tree for easier review.
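[For reference, a hedged sketch of how the reworked interface is meant to be used from a test script, going by the POD in the attached patches; $node here is any PostgresNode instance.]

    # List context returns the psql exit code plus captured output.
    my ($ret, $stdout, $stderr) = $node->psql(
        'postgres', 'SELECT 1',
        timeout      => 30,
        extra_params => ['--single-transaction']);
    die "query failed: $stderr" if $ret != 0;

    # psql_check dies on any SQL error and just returns stdout.
    my $result = $node->psql_check('postgres', 'SELECT count(*) FROM pg_class');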
007 adds PostgresNode support for hot and cold filesystem-level backups
using pg_start_backup and pg_stop_backup, which will be required for some
coming tests and are useful by themselves.
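[To give an idea of the intended use, a rough sketch based on the PostgresNode API in the attached patches; node names and options are illustrative only, and the hot variant assumes WAL archiving is enabled on the node.]

    my $master = get_new_node('master');
    $master->init(has_archiving => 1, allows_streaming => 1);
    $master->start;

    # pg_start_backup, filesystem copy of the data directory, pg_stop_backup
    $master->backup_fs_hot('fs_backup');

    my $standby = get_new_node('standby');
    $standby->init_from_backup($master, 'fs_backup', has_restoring => 1);
    $standby->start;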
Each patch is pretty simple except for the psql patch, and even that's not
exactly hairy stuff. The potential risks are low as this is code that's not
part of the server and isn't even run on 'make check' by default. I've
tested it all under a real Perl 5.8.8 on CentOS 5.
I don't expect to be doing much more on the framework at this point as I
want to be able to get back to the code I had to enhance the framework in
order to test....
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-TAP-File-Path-remove_tree-is-new-in-2.x-use-1.x-rmtr.patchtext/x-patch; charset=US-ASCII; name=0001-TAP-File-Path-remove_tree-is-new-in-2.x-use-1.x-rmtr.patchDownload
From 7f5cf0ac7368790e4597642b59560fa1c16597d8 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Thu, 3 Mar 2016 13:18:23 +0800
Subject: [PATCH 1/7] TAP: File::Path::remove_tree is new in 2.x, use 1.x
rmtree instead
Preserve Perl 5.8.8 support by using the legacy rmtree call
instead of remove_tree from File::Path 2.x.
---
src/test/recovery/t/004_timeline_switch.pl | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index 58bf580..a180b94 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -3,7 +3,7 @@
# on a new timeline.
use strict;
use warnings;
-use File::Path qw(remove_tree);
+use File::Path;
use PostgresNode;
use TestLib;
use Test::More tests => 1;
@@ -46,7 +46,7 @@ $node_master->teardown_node;
$node_standby_1->promote;
# Switch standby 2 to replay from standby 1
-remove_tree($node_standby_2->data_dir . '/recovery.conf');
+File::Path::rmtree($node_standby_2->data_dir . '/recovery.conf');
my $connstr_1 = $node_standby_1->connstr;
$node_standby_2->append_conf(
'recovery.conf', qq(
--
2.1.0
0002-TAP-Fix-syntax-error-in-recovery-tests.patchtext/x-patch; charset=US-ASCII; name=0002-TAP-Fix-syntax-error-in-recovery-tests.patchDownload
From c346e15f613931d0b8d65025076a998234dbe26e Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Thu, 3 Mar 2016 14:07:32 +0800
Subject: [PATCH 2/7] TAP: Fix syntax error in recovery tests
pg_create_restore_point was never running successfully in the
TAP tests for recovery because of a typo.
---
src/test/recovery/t/003_recovery_targets.pl | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
index 8b581cc..a0d8b45 100644
--- a/src/test/recovery/t/003_recovery_targets.pl
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -83,8 +83,8 @@ $node_master->psql('postgres',
my $recovery_name = "my_target";
my $lsn4 =
$node_master->psql('postgres', "SELECT pg_current_xlog_location();");
-$node_master->psql('postgres',
- "SELECT pg_create_restore_point('$recovery_name'");
+$node_master->psql_check('postgres',
+ "SELECT pg_create_restore_point('$recovery_name');");
# Force archiving of WAL file
$node_master->psql('postgres', "SELECT pg_switch_xlog()");
--
2.1.0
0003-TAP-Perltidy-PostgresNode.pm.patchtext/x-patch; charset=US-ASCII; name=0003-TAP-Perltidy-PostgresNode.pm.patchDownload
From ca0c28be54329473ef0bed07358b519835c5b447 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:44:11 +0800
Subject: [PATCH 3/7] TAP: Perltidy PostgresNode.pm
Recent PostgresNode changes were committed without using the project's
perltidyrc. Tidy it now.
---
src/test/perl/PostgresNode.pm | 58 ++++++++++++++++++++++---------------------
src/test/perl/README | 3 +++
src/test/perl/SimpleTee.pm | 13 ++++++----
3 files changed, 41 insertions(+), 33 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 4ca7f77..7c8e66e 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1,3 +1,4 @@
+
=pod
=head1 NAME
@@ -106,18 +107,15 @@ of finding port numbers, registering instances for cleanup, etc.
sub new
{
- my $class = shift;
- my $name = shift;
- my $pghost = shift;
- my $pgport = shift;
+ my ($class, $name, $pghost, $pgport) = @_;
my $testname = basename($0);
$testname =~ s/\.[^.]+$//;
- my $self = {
- _port => $pgport,
- _host => $pghost,
- _basedir => TestLib::tempdir("data_" . $name),
- _name => $name,
- _logfile => "$TestLib::log_path/${testname}_${name}.log" };
+ my $self = {
+ _port => $pgport,
+ _host => $pghost,
+ _basedir => TestLib::tempdir("data_" . $name),
+ _name => $name,
+ _logfile => "$TestLib::log_path/${testname}_${name}.log" };
bless $self, $class;
$self->dump_info;
@@ -367,7 +365,7 @@ sub init
$params{hba_permit_replication} = 1
unless defined $params{hba_permit_replication};
$params{allows_streaming} = 0 unless defined $params{allows_streaming};
- $params{has_archiving} = 0 unless defined $params{has_archiving};
+ $params{has_archiving} = 0 unless defined $params{has_archiving};
mkdir $self->backup_dir;
mkdir $self->archive_dir;
@@ -405,7 +403,7 @@ sub init
close $conf;
$self->set_replication_conf if $params{hba_permit_replication};
- $self->enable_archiving if $params{has_archiving};
+ $self->enable_archiving if $params{has_archiving};
}
=pod
@@ -492,7 +490,7 @@ sub init_from_backup
$params{has_streaming} = 0 unless defined $params{has_streaming};
$params{hba_permit_replication} = 1
- unless defined $params{hba_permit_replication};
+ unless defined $params{hba_permit_replication};
$params{has_restoring} = 0 unless defined $params{has_restoring};
print
@@ -514,7 +512,7 @@ sub init_from_backup
qq(
port = $port
));
- $self->set_replication_conf if $params{hba_permit_replication};
+ $self->set_replication_conf if $params{hba_permit_replication};
$self->enable_streaming($root_node) if $params{has_streaming};
$self->enable_restoring($root_node) if $params{has_restoring};
}
@@ -607,19 +605,19 @@ sub promote
my $logfile = $self->logfile;
my $name = $self->name;
print "### Promoting node \"$name\"\n";
- TestLib::system_log('pg_ctl', '-D', $pgdata, '-l', $logfile,
- 'promote');
+ TestLib::system_log('pg_ctl', '-D', $pgdata, '-l', $logfile, 'promote');
}
# Internal routine to enable streaming replication on a standby node.
sub enable_streaming
{
- my ($self, $root_node) = @_;
+ my ($self, $root_node) = @_;
my $root_connstr = $root_node->connstr;
- my $name = $self->name;
+ my $name = $self->name;
print "### Enabling streaming replication for node \"$name\"\n";
- $self->append_conf('recovery.conf', qq(
+ $self->append_conf(
+ 'recovery.conf', qq(
primary_conninfo='$root_connstr application_name=$name'
standby_mode=on
));
@@ -628,7 +626,7 @@ standby_mode=on
# Internal routine to enable archive recovery command on a standby node
sub enable_restoring
{
- my ($self, $root_node) = @_;
+ my ($self, $root_node) = @_;
my $path = $root_node->archive_dir;
my $name = $self->name;
@@ -641,11 +639,13 @@ sub enable_restoring
# first. Paths also need to be double-quoted to prevent failures where
# the path contains spaces.
$path =~ s{\\}{\\\\}g if ($TestLib::windows_os);
- my $copy_command = $TestLib::windows_os ?
- qq{copy "$path\\\\%f" "%p"} :
- qq{cp $path/%f %p};
+ my $copy_command =
+ $TestLib::windows_os
+ ? qq{copy "$path\\\\%f" "%p"}
+ : qq{cp $path/%f %p};
- $self->append_conf('recovery.conf', qq(
+ $self->append_conf(
+ 'recovery.conf', qq(
restore_command = '$copy_command'
standby_mode = on
));
@@ -667,12 +667,14 @@ sub enable_archiving
# first. Paths also need to be double-quoted to prevent failures where
# the path contains spaces.
$path =~ s{\\}{\\\\}g if ($TestLib::windows_os);
- my $copy_command = $TestLib::windows_os ?
- qq{copy "%p" "$path\\\\%f"} :
- qq{cp %p $path/%f};
+ my $copy_command =
+ $TestLib::windows_os
+ ? qq{copy "%p" "$path\\\\%f"}
+ : qq{cp %p $path/%f};
# Enable archive_mode and archive_command on node
- $self->append_conf('postgresql.conf', qq(
+ $self->append_conf(
+ 'postgresql.conf', qq(
archive_mode = on
archive_command = '$copy_command'
));
diff --git a/src/test/perl/README b/src/test/perl/README
index 7b6de5f..9eae159 100644
--- a/src/test/perl/README
+++ b/src/test/perl/README
@@ -11,6 +11,9 @@ isolation tester specs in src/test/isolation, if possible. If not, check to
see if your new tests make sense under an existing tree in src/test, like
src/test/ssl, or should be added to one of the suites for an existing utility.
+Note that all tests and test tools should have perltidy run on them before
+patches are submitted, using perltidy --profile=src/tools/pgindent/perltidyrc
+
Writing tests
-------------
diff --git a/src/test/perl/SimpleTee.pm b/src/test/perl/SimpleTee.pm
index 5da82d0..ea2f2ee 100644
--- a/src/test/perl/SimpleTee.pm
+++ b/src/test/perl/SimpleTee.pm
@@ -10,17 +10,20 @@
package SimpleTee;
use strict;
-sub TIEHANDLE {
+sub TIEHANDLE
+{
my $self = shift;
bless \@_, $self;
}
-sub PRINT {
+sub PRINT
+{
my $self = shift;
- my $ok = 1;
- for my $fh (@$self) {
+ my $ok = 1;
+ for my $fh (@$self)
+ {
print $fh @_ or $ok = 0;
- $fh->flush or $ok = 0;
+ $fh->flush or $ok = 0;
}
return $ok;
}
--
2.1.0
0004-TAP-Add-filtering-to-RecursiveCopy.patchtext/x-patch; charset=US-ASCII; name=0004-TAP-Add-filtering-to-RecursiveCopy.patchDownload
From 548469e609cc14d69c49908035950bdc37a74d3c Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:06:47 +0800
Subject: [PATCH 4/7] TAP: Add filtering to RecursiveCopy
Allow RecursiveCopy to accept a filter function so callers can
exclude unwanted files. Also POD-ify it.
---
src/test/perl/RecursiveCopy.pm | 76 ++++++++++++++++++++++++++++++++++++++----
1 file changed, 70 insertions(+), 6 deletions(-)
diff --git a/src/test/perl/RecursiveCopy.pm b/src/test/perl/RecursiveCopy.pm
index 9362aa8..97f84e5 100644
--- a/src/test/perl/RecursiveCopy.pm
+++ b/src/test/perl/RecursiveCopy.pm
@@ -1,4 +1,18 @@
-# RecursiveCopy, a simple recursive copy implementation
+
+=pod
+
+=head1 NAME
+
+RecursiveCopy - simple recursive copy implementation
+
+=head1 SYNOPSIS
+
+use RecursiveCopy;
+
+RecursiveCopy::copypath($from, $to);
+
+=cut
+
package RecursiveCopy;
use strict;
@@ -7,10 +21,56 @@ use warnings;
use File::Basename;
use File::Copy;
+=pod
+
+=head2 copypath($from, $to)
+
+Copy all files and directories from $from to $to. Raises an exception
+if a file would be overwritten, the source dir can't be read, or any
+I/O operation fails. Always returns true. On failure the copy may be
+in some incomplete state; no cleanup is attempted.
+
+If the keyword param 'filterfn' is defined it's invoked as a sub that
+returns true if the file/directory should be copied, false otherwise.
+The passed path is the full path to the file relative to the source
+directory.
+
+e.g.
+
+RecursiveCopy::copypath('/some/path', '/empty/dir',
+ filterfn => sub {
+ # omit children of pg_log
+ my $src = shift;
+ return $src !~ /\/pg_log\//;
+ }
+);
+
+=cut
+
sub copypath
{
- my $srcpath = shift;
- my $destpath = shift;
+ my ($srcpath, $destpath, %params) = @_;
+
+ die("if specified, 'filterfn' must be a sub ref")
+ if defined $params{filterfn} && !ref $params{filterfn};
+
+ my $filterfn;
+ if (defined $params{filterfn})
+ {
+ $filterfn = $params{filterfn};
+ }
+ else
+ {
+ $filterfn = sub { return 1; };
+ }
+
+ return _copypath_recurse($srcpath, $destpath, $filterfn);
+}
+
+# Recursive private guts of copypath
+sub _copypath_recurse
+{
+ my ($srcpath, $destpath, $filterfn) = @_;
die "Cannot operate on symlinks" if -l $srcpath or -l $destpath;
@@ -19,8 +79,11 @@ sub copypath
die "Destination path $destpath exists as file" if -f $destpath;
if (-f $srcpath)
{
- copy($srcpath, $destpath)
- or die "copy $srcpath -> $destpath failed: $!";
+ if ($filterfn->($srcpath))
+ {
+ copy($srcpath, $destpath)
+ or die "copy $srcpath -> $destpath failed: $!";
+ }
return 1;
}
@@ -32,7 +95,8 @@ sub copypath
while (my $entry = readdir($directory))
{
next if ($entry eq '.' || $entry eq '..');
- RecursiveCopy::copypath("$srcpath/$entry", "$destpath/$entry")
+ RecursiveCopy::_copypath_recurse("$srcpath/$entry",
+ "$destpath/$entry", $filterfn)
or die "copypath $srcpath/$entry -> $destpath/$entry failed";
}
closedir($directory);
--
2.1.0
0005-TAP-Add-easier-more-flexible-ways-to-invoke-psql.patchtext/x-patch; charset=US-ASCII; name=0005-TAP-Add-easier-more-flexible-ways-to-invoke-psql.patchDownload
From bf7e72c64f5f1b727ae5f3463ffb544959e57259 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:08:21 +0800
Subject: [PATCH 5/7] TAP: Add easier, more flexible ways to invoke psql
The PostgresNode::psql method is limited - it offers no access
to the return code from psql, ignores SQL errors, and offers
no access to psql's stderr.
Provide a new psql_expert that addresses those limitations and
can be used more flexibly - see the embedded PerlDoc for details.
Also add a new psql_check method that invokes psql and dies if
the SQL fails with any error. Test scripts should use this so
they automatically die if SQL that should succeed fails instead;
with the current psql method such failures would go undetected.
---
src/test/perl/PostgresNode.pm | 289 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 277 insertions(+), 12 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 7c8e66e..115b9e3 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -22,8 +22,20 @@ PostgresNode - class representing PostgreSQL server instance
$node->restart('fast');
# run a query with psql
- # like: psql -qAXt postgres -c 'SELECT 1;'
- $psql_stdout = $node->psql('postgres', 'SELECT 1');
+ # like:
+ # echo 'SELECT 1' | psql -qAXt postgres -v ON_ERROR_STOP=1
+ $psql_stdout = $node->psql_check('postgres', 'SELECT 1');
+
+ # Run psql with a timeout, capturing stdout and stderr
+ # as well as the psql exit code. Pass some extra psql
+ # options. If there's an error from psql raise an exception.
+ my ($stdout, $stderr, $timed_out);
+ my $cmdret = $psql_expert('postgres', 'SELECT pg_sleep(60)',
+ stdout => \$stdout, stderr => \$stderr,
+ timeout => 30, timed_out => \$timed_out,
+ extra_params => ['--single-transaction'],
+ on_error_die => 1)
+ print "Sleep timed out" if $timed_out;
# run query every second until it returns 't'
# or times out
@@ -70,6 +82,7 @@ use IPC::Run;
use RecursiveCopy;
use Test::More;
use TestLib ();
+use Scalar::Util qw(blessed);
our @EXPORT = qw(
get_new_node
@@ -782,11 +795,16 @@ sub teardown_node
=item $node->psql(dbname, sql)
-Run a query with psql and return stdout, or on error print stderr.
+Run a query with psql and return stdout if psql returns with no error.
-Executes a query/script with psql and returns psql's standard output. psql is
-run in unaligned tuples-only quiet mode with psqlrc disabled so simple queries
-will just return the result row(s) with fields separated by commas.
+psql is run in unaligned tuples-only quiet mode with psqlrc disabled so simple
+queries will just return the result row(s) with fields separated by commas.
+
+If any stderr output is generated it is printed and discarded.
+
+Nonzero return codes from psql are ignored and discarded.
+
+Use psql_expert for more control.
=cut
@@ -795,24 +813,271 @@ sub psql
my ($self, $dbname, $sql) = @_;
my ($stdout, $stderr);
+
my $name = $self->name;
print("### Running SQL command on node \"$name\": $sql\n");
- IPC::Run::run [ 'psql', '-XAtq', '-d', $self->connstr($dbname), '-f',
- '-' ], '<', \$sql, '>', \$stdout, '2>', \$stderr
- or die;
+ # Run the command, ignoring errors
+ $self->psql_expert(
+ $dbname, $sql,
+ stdout => \$stdout,
+ stderr => \$stderr,
+ on_error_die => 0,
+ on_error_stop => 0);
if ($stderr ne "")
{
print "#### Begin standard error\n";
print $stderr;
- print "#### End standard error\n";
+ print "\n#### End standard error\n";
}
- chomp $stdout;
- $stdout =~ s/\r//g if $Config{osname} eq 'msys';
return $stdout;
}
+=pod $node->psql_check($dbname, $sql) => stdout
+
+Invoke 'psql' to run 'sql' on 'dbname' and return its stdout on success.
+Die if the SQL produces an error. Runs with ON_ERROR_STOP set.
+
+Takes optional extra params like timeout and timed_out parameters with the same
+options as psql_expert.
+
+=cut
+
+sub psql_check
+{
+ my ($self, $dbname, $sql, %params) = @_;
+
+ my ($stdout, $stderr);
+
+ my $ret = $self->psql_expert(
+ $dbname, $sql,
+ %params,
+ stdout => \$stdout,
+ stderr => \$stderr,
+ on_error_die => 1,
+ on_error_stop => 1);
+
+ # psql can emit stderr from NOTICEs etc
+ if ($stderr ne "")
+ {
+ print "#### Begin standard error\n";
+ print $stderr;
+ print "\n#### End standard error\n";
+ }
+
+ return $stdout;
+}
+
+=pod $node->psql_expert($dbname, $sql, %params) => psql_retval
+
+Invoke 'psql' to run 'sql' on 'dbname' and return the return value from
+psql, which is run with on_error_stop by default so that it will stop running
+sql and return 3 if the passed SQL results in an error.
+
+psql is invoked in tuples-only unaligned mode with reading of psqlrc disabled. That
+may be overridden by passing extra psql parameters.
+
+stdout and stderr are transformed to unix line endings if on Windows and any
+trailing newline is removed.
+
+Dies on failure to invoke psql but not if psql exits with a nonzero return code
+(unless on_error_die specified). Dies if psql exits with a signal.
+
+=over
+
+=item stdout => \$stdout
+
+If a scalar to write stdout to is passed as the keyword parameter 'stdout' it
+will be set to psql's stdout. If unset stdout is not redirected and will be
+printed to Perl's stdout unless psql is called in array context, in which case
+it's captured and returned.
+
+=item stderr => \$stderr
+
+Same as 'stdout' but gets stderr. If the same scalar is passed for both stdout
+and stderr the results may be interleaved unpredictably.
+
+=item on_error_stop => 1
+
+By default psql_expert invokes psql with ON_ERROR_STOP=1 set so it will
+stop executing SQL at the first error and return exit code 2. If you want
+to ignore errors, pass 0 to on_error_stop.
+
+=item on_error_die => 0
+
+By default psql_expert returns psql's result code. Pass on_error_die to instead
+die with an informative message.
+
+=item timeout => 'interval'
+
+Set a timeout for the psql call as an interval accepted by IPC::Run::timer.
+Integer seconds is fine. psql_expert dies with an exception on timeout unless
+the timed_out parameter is passed.
+
+=item timed_out => \$timed_out
+
+Keyword parameter. Pass a scalar reference and it'll be set to true if the psql
+call timed out instead of dying. Has no effect unless 'timeout' is set.
+
+=item extra_params => ['--single-transaction']
+
+Pass additional parameters to psql. Must be an arrayref.
+
+=back
+
+e.g.
+
+ my ($stdout, $stderr, $timed_out);
+ my $cmdret = $psql_expert('postgres', 'SELECT pg_sleep(60)',
+ stdout => \$stdout, stderr => \$stderr,
+ timeout => 30, timed_out => \$timed_out,
+ extra_params => ['--single-transaction'])
+
+will set $cmdret to undef and $timed_out to a true value.
+
+ $psql_expert('postgres', $sql, on_error_die => 1);
+
+dies with an informative message if $sql fails.
+
+=cut
+
+sub psql_expert
+{
+ my ($self, $dbname, $sql, %params) = @_;
+
+ my $stdout = $params{stdout};
+ my $stderr = $params{stderr};
+ my $timeout = undef;
+ my $timeout_exception = 'psql timed out';
+ my @psql_params =
+ ('psql', '-XAtq', '-d', $self->connstr($dbname), '-f', '-');
+
+ # If the caller wants an array and hasn't passed stdout/stderr
+ # references, allocate temporary ones to capture them so we
+ # can return them. Otherwise we won't redirect them at all.
+ if (wantarray)
+ {
+ if (!defined($stdout))
+ {
+ my $temp_stdout = "";
+ $stdout = \$temp_stdout;
+ }
+ if (!defined($stderr))
+ {
+ my $temp_stderr = "";
+ $stderr = \$temp_stderr;
+ }
+ }
+
+ $params{on_error_stop} = 1 unless defined $params{on_error_stop};
+ $params{on_error_die} = 0 unless defined $params{on_error_die};
+
+ push @psql_params, '-v', 'ON_ERROR_STOP=1' if $params{on_error_stop};
+ push @psql_params, @{ $params{extra_params} }
+ if defined $params{extra_params};
+
+ $timeout =
+ IPC::Run::timeout($params{timeout}, exception => $timeout_exception)
+ if (defined($params{timeout}));
+
+ # IPC::Run would otherwise append to existing contents:
+ $$stdout = "" if ref($stdout);
+ $$stderr = "" if ref($stderr);
+
+ my $ret;
+
+ # Perl's exception handling is ... interesting. Especially since we have to
+ # support 5.8.8. So we hide that from the caller, returning true/false for
+ # timeout instead. See
+ # http://search.cpan.org/~ether/Try-Tiny-0.24/lib/Try/Tiny.pm for
+ # background.
+ my $error = do
+ {
+ local $@;
+ eval {
+ my @ipcrun_opts = (\@psql_params, '<', \$sql);
+ push @ipcrun_opts, '>', $stdout if defined $stdout;
+ push @ipcrun_opts, '2>', $stderr if defined $stderr;
+ push @ipcrun_opts, $timeout if defined $timeout;
+
+ IPC::Run::run @ipcrun_opts;
+ $ret = $?;
+ };
+ my $exc_save = $@;
+ if ($exc_save)
+ {
+ # IPC::Run::run threw an exception. re-throw unless it's a
+ # timeout, which we'll handle by testing is_expired
+ if (blessed($exc_save) || $exc_save ne $timeout_exception)
+ {
+ print
+"Exception from IPC::Run::run when invoking psql: '$exc_save'\n";
+ die $exc_save;
+ }
+ else
+ {
+ $ret = undef;
+
+ die
+ "Got timeout exception '$exc_save' but timer not expired?!"
+ unless $timeout->is_expired;
+
+ if (defined($params{timed_out}))
+ {
+ ${ $params{timed_out} } = 1;
+ }
+ else
+ {
+ die
+"psql timed out while running '@psql_params', stderr '$$stderr'";
+ }
+ }
+ }
+ };
+
+ if (defined $$stdout)
+ {
+ chomp $$stdout;
+ $$stdout =~ s/\r//g if $Config{osname} eq 'msys';
+ }
+
+ if (defined $$stderr)
+ {
+ chomp $$stderr;
+ $$stderr =~ s/\r//g if $Config{osname} eq 'msys';
+ }
+
+ # See http://perldoc.perl.org/perlvar.html#%24CHILD_ERROR
+ # We don't use IPC::Run::Simple to limit dependencies.
+ #
+ # We always die on signal. If someone wants to capture signals
+ # to psql we can return it with another reference out parameter.
+ die "psql exited with signal "
+ . ($ret & 127)
+ . ": '$$stderr' while running '@psql_params'"
+ if $ret & 127;
+ die "psql exited with core dump: '$$stderr' while running '@psql_params'"
+ if $ret & 128;
+ $ret = $ret >> 8;
+
+ if ($ret && $params{on_error_die})
+ {
+ die
+"psql command line syntax error or internal error: '$$stderr' while running '@psql_params'"
+ if $ret == 1;
+ die "psql connection error: '$$stderr' while running '@psql_params'"
+ if $ret == 2;
+ die
+"error when running passed SQL: '$$stderr' while running '@psql_params'"
+ if $ret == 3;
+ die
+"unexpected error code $ret from psql: '$$stderr' while running '@psql_params'";
+ }
+
+ return $ret;
+}
+
=pod
=item $node->poll_query_until(dbname, query)
--
2.1.0
0006-TAP-Rename-psql_expert-to-psql-and-use-it-in-all-tes.patchtext/x-patch; charset=US-ASCII; name=0006-TAP-Rename-psql_expert-to-psql-and-use-it-in-all-tes.patchDownload
From e61b696033698c60df4186d5cb939621124ea09a Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Thu, 3 Mar 2016 13:40:53 +0800
Subject: [PATCH 6/7] TAP: Rename psql_expert to psql and use it in all tests
Make TAP tests check the exit code of psql by default, modifying
them in a few places where failures are actually expected.
---
src/bin/pg_basebackup/t/010_pg_basebackup.pl | 22 ++++-----
src/bin/pgbench/t/001_pgbench.pl | 2 +-
src/bin/scripts/t/010_clusterdb.pl | 2 +-
src/bin/scripts/t/030_createlang.pl | 2 +-
src/bin/scripts/t/050_dropdb.pl | 2 +-
src/bin/scripts/t/070_dropuser.pl | 2 +-
src/bin/scripts/t/090_reindexdb.pl | 2 +-
src/test/modules/commit_ts/t/001_base.pl | 8 +--
src/test/modules/commit_ts/t/002_standby.pl | 22 +++++----
src/test/modules/commit_ts/t/003_standby_2.pl | 20 ++++----
src/test/perl/PostgresNode.pm | 71 +++++++--------------------
src/test/recovery/t/001_stream_rep.pl | 20 ++------
src/test/recovery/t/002_archiving.pl | 10 ++--
src/test/recovery/t/003_recovery_targets.pl | 24 ++++-----
src/test/recovery/t/004_timeline_switch.pl | 10 ++--
src/test/recovery/t/005_replay_delay.pl | 10 ++--
16 files changed, 95 insertions(+), 134 deletions(-)
diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
index a275077..b68ffad 100644
--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl
+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
@@ -112,9 +112,9 @@ SKIP:
symlink "$tempdir", $shorter_tempdir;
mkdir "$tempdir/tblspc1";
- $node->psql('postgres',
+ $node->psql_check('postgres',
"CREATE TABLESPACE tblspc1 LOCATION '$shorter_tempdir/tblspc1';");
- $node->psql('postgres', "CREATE TABLE test1 (a int) TABLESPACE tblspc1;");
+ $node->psql_check('postgres', "CREATE TABLE test1 (a int) TABLESPACE tblspc1;");
$node->command_ok([ 'pg_basebackup', '-D', "$tempdir/tarbackup2", '-Ft' ],
'tar format with tablespaces');
ok(-f "$tempdir/tarbackup2/base.tar", 'backup tar was created');
@@ -140,9 +140,9 @@ SKIP:
closedir $dh;
mkdir "$tempdir/tbl=spc2";
- $node->psql('postgres', "DROP TABLE test1;");
- $node->psql('postgres', "DROP TABLESPACE tblspc1;");
- $node->psql('postgres',
+ $node->psql_check('postgres', "DROP TABLE test1;");
+ $node->psql_check('postgres', "DROP TABLESPACE tblspc1;");
+ $node->psql_check('postgres',
"CREATE TABLESPACE tblspc2 LOCATION '$shorter_tempdir/tbl=spc2';");
$node->command_ok(
[ 'pg_basebackup', '-D', "$tempdir/backup3", '-Fp',
@@ -150,15 +150,15 @@ SKIP:
'mapping tablespace with = sign in path');
ok(-d "$tempdir/tbackup/tbl=spc2",
'tablespace with = sign was relocated');
- $node->psql('postgres', "DROP TABLESPACE tblspc2;");
+ $node->psql_check('postgres', "DROP TABLESPACE tblspc2;");
mkdir "$tempdir/$superlongname";
- $node->psql('postgres',
+ $node->psql_check('postgres',
"CREATE TABLESPACE tblspc3 LOCATION '$tempdir/$superlongname';");
$node->command_ok(
[ 'pg_basebackup', '-D', "$tempdir/tarbackup_l3", '-Ft' ],
'pg_basebackup tar with long symlink target');
- $node->psql('postgres', "DROP TABLESPACE tblspc3;");
+ $node->psql_check('postgres', "DROP TABLESPACE tblspc3;");
}
$node->command_ok([ 'pg_basebackup', '-D', "$tempdir/backupR", '-R' ],
@@ -199,9 +199,9 @@ $node->command_fails(
'slot1' ],
'pg_basebackup fails with nonexistent replication slot');
-$node->psql('postgres',
+$node->psql_check('postgres',
q{SELECT * FROM pg_create_physical_replication_slot('slot1')});
-my $lsn = $node->psql('postgres',
+my $lsn = $node->psql_check('postgres',
q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot1'}
);
is($lsn, '', 'restart LSN of new slot is null');
@@ -209,7 +209,7 @@ $node->command_ok(
[ 'pg_basebackup', '-D', "$tempdir/backupxs_sl", '-X',
'stream', '-S', 'slot1' ],
'pg_basebackup -X stream with replication slot runs');
-$lsn = $node->psql('postgres',
+$lsn = $node->psql_check('postgres',
q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot1'}
);
like($lsn, qr!^0/[0-9A-Z]{7,8}$!, 'restart LSN of slot has advanced');
diff --git a/src/bin/pgbench/t/001_pgbench.pl b/src/bin/pgbench/t/001_pgbench.pl
index 88e83ab..ba4d751 100644
--- a/src/bin/pgbench/t/001_pgbench.pl
+++ b/src/bin/pgbench/t/001_pgbench.pl
@@ -12,7 +12,7 @@ use Test::More tests => 3;
my $node = get_new_node('main');
$node->init;
$node->start;
-$node->psql('postgres',
+$node->psql_check('postgres',
'CREATE UNLOGGED TABLE oid_tbl () WITH OIDS; '
. 'ALTER TABLE oid_tbl ADD UNIQUE (oid);');
my $script = $node->basedir . '/pgbench_script';
diff --git a/src/bin/scripts/t/010_clusterdb.pl b/src/bin/scripts/t/010_clusterdb.pl
index 11d678a..0665262 100644
--- a/src/bin/scripts/t/010_clusterdb.pl
+++ b/src/bin/scripts/t/010_clusterdb.pl
@@ -21,7 +21,7 @@ $node->issues_sql_like(
$node->command_fails([ 'clusterdb', '-t', 'nonexistent' ],
'fails with nonexistent table');
-$node->psql('postgres',
+$node->psql_check('postgres',
'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a); CLUSTER test1 USING test1x'
);
$node->issues_sql_like(
diff --git a/src/bin/scripts/t/030_createlang.pl b/src/bin/scripts/t/030_createlang.pl
index 38e3516..783db45 100644
--- a/src/bin/scripts/t/030_createlang.pl
+++ b/src/bin/scripts/t/030_createlang.pl
@@ -16,7 +16,7 @@ $node->start;
$node->command_fails([ 'createlang', 'plpgsql' ],
'fails if language already exists');
-$node->psql('postgres', 'DROP EXTENSION plpgsql');
+$node->psql_check('postgres', 'DROP EXTENSION plpgsql');
$node->issues_sql_like(
[ 'createlang', 'plpgsql' ],
qr/statement: CREATE EXTENSION "plpgsql"/,
diff --git a/src/bin/scripts/t/050_dropdb.pl b/src/bin/scripts/t/050_dropdb.pl
index fb4f656..31ab9de 100644
--- a/src/bin/scripts/t/050_dropdb.pl
+++ b/src/bin/scripts/t/050_dropdb.pl
@@ -13,7 +13,7 @@ my $node = get_new_node('main');
$node->init;
$node->start;
-$node->psql('postgres', 'CREATE DATABASE foobar1');
+$node->psql_check('postgres', 'CREATE DATABASE foobar1');
$node->issues_sql_like(
[ 'dropdb', 'foobar1' ],
qr/statement: DROP DATABASE foobar1/,
diff --git a/src/bin/scripts/t/070_dropuser.pl b/src/bin/scripts/t/070_dropuser.pl
index 22079f6..8d25374 100644
--- a/src/bin/scripts/t/070_dropuser.pl
+++ b/src/bin/scripts/t/070_dropuser.pl
@@ -13,7 +13,7 @@ my $node = get_new_node('main');
$node->init;
$node->start;
-$node->psql('postgres', 'CREATE ROLE foobar1');
+$node->psql_check('postgres', 'CREATE ROLE foobar1');
$node->issues_sql_like(
[ 'dropuser', 'foobar1' ],
qr/statement: DROP ROLE foobar1/,
diff --git a/src/bin/scripts/t/090_reindexdb.pl b/src/bin/scripts/t/090_reindexdb.pl
index 7f57af8..437fbae 100644
--- a/src/bin/scripts/t/090_reindexdb.pl
+++ b/src/bin/scripts/t/090_reindexdb.pl
@@ -20,7 +20,7 @@ $node->issues_sql_like(
qr/statement: REINDEX DATABASE postgres;/,
'SQL REINDEX run');
-$node->psql('postgres',
+$node->psql_check('postgres',
'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a);');
$node->issues_sql_like(
[ 'reindexdb', '-t', 'test1', 'postgres' ],
diff --git a/src/test/modules/commit_ts/t/001_base.pl b/src/test/modules/commit_ts/t/001_base.pl
index 122b515..f227186 100644
--- a/src/test/modules/commit_ts/t/001_base.pl
+++ b/src/test/modules/commit_ts/t/001_base.pl
@@ -13,17 +13,17 @@ $node->append_conf('postgresql.conf', 'track_commit_timestamp = on');
$node->start;
# Create a table, compare "now()" to the commit TS of its xmin
-$node->psql('postgres', 'create table t as select now from (select now(), pg_sleep(1)) f');
-my $true = $node->psql('postgres',
+$node->psql_check('postgres', 'create table t as select now from (select now(), pg_sleep(1)) f');
+my $true = $node->psql_check('postgres',
'select t.now - ts.* < \'1s\' from t, pg_class c, pg_xact_commit_timestamp(c.xmin) ts where relname = \'t\'');
is($true, 't', 'commit TS is set');
-my $ts = $node->psql('postgres',
+my $ts = $node->psql_check('postgres',
'select ts.* from pg_class, pg_xact_commit_timestamp(xmin) ts where relname = \'t\'');
# Verify that we read the same TS after crash recovery
$node->stop('immediate');
$node->start;
-my $recovered_ts = $node->psql('postgres',
+my $recovered_ts = $node->psql_check('postgres',
'select ts.* from pg_class, pg_xact_commit_timestamp(xmin) ts where relname = \'t\'');
is($recovered_ts, $ts, 'commit TS remains after crash recovery');
diff --git a/src/test/modules/commit_ts/t/002_standby.pl b/src/test/modules/commit_ts/t/002_standby.pl
index 5cc2501..04fde00 100644
--- a/src/test/modules/commit_ts/t/002_standby.pl
+++ b/src/test/modules/commit_ts/t/002_standby.pl
@@ -4,7 +4,7 @@ use strict;
use warnings;
use TestLib;
-use Test::More tests => 2;
+use Test::More tests => 4;
use PostgresNode;
my $bkplabel = 'backup';
@@ -25,31 +25,33 @@ $standby->start;
for my $i (1 .. 10)
{
- $master->psql('postgres', "create table t$i()");
+ $master->psql_check('postgres', "create table t$i()");
}
-my $master_ts = $master->psql('postgres',
+my $master_ts = $master->psql_check('postgres',
qq{SELECT ts.* FROM pg_class, pg_xact_commit_timestamp(xmin) AS ts WHERE relname = 't10'});
-my $master_lsn = $master->psql('postgres',
+my $master_lsn = $master->psql_check('postgres',
'select pg_current_xlog_location()');
$standby->poll_query_until('postgres',
qq{SELECT '$master_lsn'::pg_lsn <= pg_last_xlog_replay_location()})
or die "slave never caught up";
-my $standby_ts = $standby->psql('postgres',
+my $standby_ts = $standby->psql_check('postgres',
qq{select ts.* from pg_class, pg_xact_commit_timestamp(xmin) ts where relname = 't10'});
is($master_ts, $standby_ts, "standby gives same value as master");
$master->append_conf('postgresql.conf', 'track_commit_timestamp = off');
$master->restart;
-$master->psql('postgres', 'checkpoint');
-$master_lsn = $master->psql('postgres',
+$master->psql_check('postgres', 'checkpoint');
+$master_lsn = $master->psql_check('postgres',
'select pg_current_xlog_location()');
$standby->poll_query_until('postgres',
qq{SELECT '$master_lsn'::pg_lsn <= pg_last_xlog_replay_location()})
or die "slave never caught up";
-$standby->psql('postgres', 'checkpoint');
+$standby->psql_check('postgres', 'checkpoint');
# This one should raise an error now
-$standby_ts = $standby->psql('postgres',
+my ($ret, $standby_ts_stdout, $standby_ts_stderr) = $standby->psql('postgres',
'select ts.* from pg_class, pg_xact_commit_timestamp(xmin) ts where relname = \'t10\'');
-is($standby_ts, '', "standby gives no value when master turned feature off");
+is($ret, 3, 'standby errors when master turned feature off');
+is($standby_ts_stdout, '', "standby gives no value when master turned feature off");
+like($standby_ts_stderr, qr/could not get commit timestamp data/, 'expected error when master turned feature off');
diff --git a/src/test/modules/commit_ts/t/003_standby_2.pl b/src/test/modules/commit_ts/t/003_standby_2.pl
index fadb6a2..ba94848 100644
--- a/src/test/modules/commit_ts/t/003_standby_2.pl
+++ b/src/test/modules/commit_ts/t/003_standby_2.pl
@@ -4,7 +4,7 @@ use strict;
use warnings;
use TestLib;
-use Test::More tests => 2;
+use Test::More tests => 4;
use PostgresNode;
my $bkplabel = 'backup';
@@ -24,23 +24,25 @@ $standby->start;
for my $i (1 .. 10)
{
- $master->psql('postgres', "create table t$i()");
+ $master->psql_check('postgres', "create table t$i()");
}
$master->append_conf('postgresql.conf', 'track_commit_timestamp = off');
$master->restart;
-$master->psql('postgres', 'checkpoint');
-my $master_lsn = $master->psql('postgres',
+$master->psql_check('postgres', 'checkpoint');
+my $master_lsn = $master->psql_check('postgres',
'select pg_current_xlog_location()');
$standby->poll_query_until('postgres',
qq{SELECT '$master_lsn'::pg_lsn <= pg_last_xlog_replay_location()})
or die "slave never caught up";
-$standby->psql('postgres', 'checkpoint');
+$standby->psql_check('postgres', 'checkpoint');
$standby->restart;
-my $standby_ts = $standby->psql('postgres',
+my ($psql_ret, $standby_ts_stdout, $standby_ts_stderr) = $standby->psql('postgres',
qq{SELECT ts.* FROM pg_class, pg_xact_commit_timestamp(xmin) AS ts WHERE relname = 't10'});
-is($standby_ts, '', "standby does not return a value after restart");
+is($psql_ret, 3, 'expect error when getting commit timestamp after restart');
+is($standby_ts_stdout, '', "standby does not return a value after restart");
+like($standby_ts_stderr, qr/could not get commit timestamp data/, 'expected err msg after restart');
$master->append_conf('postgresql.conf', 'track_commit_timestamp = on');
$master->restart;
@@ -50,7 +52,7 @@ $master->restart;
system_or_bail('pg_ctl', '-w', '-D', $standby->data_dir, 'promote');
$standby->poll_query_until('postgres', "SELECT pg_is_in_recovery() <> true");
-$standby->psql('postgres', "create table t11()");
-$standby_ts = $standby->psql('postgres',
+$standby->psql_check('postgres', "create table t11()");
+my $standby_ts = $standby->psql_check('postgres',
qq{SELECT ts.* FROM pg_class, pg_xact_commit_timestamp(xmin) AS ts WHERE relname = 't11'});
isnt($standby_ts, '', "standby gives valid value ($standby_ts) after promotion");
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 115b9e3..c637a9b 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -30,7 +30,7 @@ PostgresNode - class representing PostgreSQL server instance
# as well as the psql exit code. Pass some extra psql
# options. If there's an error from psql raise an exception.
my ($stdout, $stderr, $timed_out);
- my $cmdret = $psql_expert('postgres', 'SELECT pg_sleep(60)',
+ my $cmdret = $psql('postgres', 'SELECT pg_sleep(60)',
stdout => \$stdout, stderr => \$stderr,
timeout => 30, timed_out => \$timed_out,
extra_params => ['--single-transaction'],
@@ -791,56 +791,13 @@ sub teardown_node
$self->stop('immediate');
}
-=pod
-
-=item $node->psql(dbname, sql)
-
-Run a query with psql and return stdout if psql returns with no error.
-
-psql is run in unaligned tuples-only quiet mode with psqlrc disabled so simple
-queries will just return the result row(s) with fields separated by commas.
-
-If any stderr output is generated it is printed and discarded.
-
-Nonzero return codes from psql are ignored and discarded.
-
-Use psql_expert for more control.
-
-=cut
-
-sub psql
-{
- my ($self, $dbname, $sql) = @_;
-
- my ($stdout, $stderr);
-
- my $name = $self->name;
- print("### Running SQL command on node \"$name\": $sql\n");
-
- # Run the command, ignoring errors
- $self->psql_expert(
- $dbname, $sql,
- stdout => \$stdout,
- stderr => \$stderr,
- on_error_die => 0,
- on_error_stop => 0);
-
- if ($stderr ne "")
- {
- print "#### Begin standard error\n";
- print $stderr;
- print "\n#### End standard error\n";
- }
- return $stdout;
-}
-
=pod $node->psql_check($dbname, $sql) => stdout
Invoke 'psql' to run 'sql' on 'dbname' and return its stdout on success.
Die if the SQL produces an error. Runs with ON_ERROR_STOP set.
Takes optional extra params like timeout and timed_out parameters with the same
-options as psql_expert.
+options as psql.
=cut
@@ -850,7 +807,7 @@ sub psql_check
my ($stdout, $stderr);
- my $ret = $self->psql_expert(
+ my $ret = $self->psql(
$dbname, $sql,
%params,
stdout => \$stdout,
@@ -869,12 +826,15 @@ sub psql_check
return $stdout;
}
-=pod $node->psql_expert($dbname, $sql, %params) => psql_retval
+=pod $node->psql($dbname, $sql, %params) => psql_retval
Invoke 'psql' to run 'sql' on 'dbname' and return the return value from
psql, which is run with on_error_stop by default so that it will stop running
sql and return 3 if the passed SQL results in an error.
+As a convenience, if psql is called in array context it returns a ($retval,
+$stdout, $stderr) tuple.
+
psql is invoked in tuples-only unaligned mode with reading of psqlrc disabled. That
may be overridden by passing extra psql parameters.
@@ -900,19 +860,19 @@ and stderr the results may be interleaved unpredictably.
=item on_error_stop => 1
-By default psql_expert invokes psql with ON_ERROR_STOP=1 set so it will
+By default psql invokes psql with ON_ERROR_STOP=1 set so it will
stop executing SQL at the first error and return exit code 2. If you want
to ignore errors, pass 0 to on_error_stop.
=item on_error_die => 0
-By default psql_expert returns psql's result code. Pass on_error_die to instead
+By default psql returns psql's result code. Pass on_error_die to instead
die with an informative message.
=item timeout => 'interval'
Set a timeout for the psql call as an interval accepted by IPC::Run::timer.
-Integer seconds is fine. psql_expert dies with an exception on timeout unless
+Integer seconds is fine. psql dies with an exception on timeout unless
the timed_out parameter is passed.
=item timed_out => \$timed_out
@@ -942,7 +902,7 @@ dies with an informative message if $sql fails.
=cut
-sub psql_expert
+sub psql
{
my ($self, $dbname, $sql, %params) = @_;
@@ -1075,7 +1035,14 @@ sub psql_expert
"unexpected error code $ret from psql: '$$stderr' while running '@psql_params'";
}
- return $ret;
+ if (wantarray)
+ {
+ return ($ret, $$stdout, $$stderr);
+ }
+ else
+ {
+ return $ret;
+ }
}
=pod
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index 7dcca65..54de159 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -31,7 +31,7 @@ $node_standby_2->init_from_backup($node_standby_1, $backup_name,
$node_standby_2->start;
# Create some content on master and check its presence in standby 1
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a");
# Wait for standbys to catch up
@@ -47,24 +47,14 @@ $node_standby_1->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby 2 to catch up";
my $result =
- $node_standby_1->psql('postgres', "SELECT count(*) FROM tab_int");
+ $node_standby_1->psql_check('postgres', "SELECT count(*) FROM tab_int");
print "standby 1: $result\n";
is($result, qq(1002), 'check streamed content on standby 1');
-$result = $node_standby_2->psql('postgres', "SELECT count(*) FROM tab_int");
+$result = $node_standby_2->psql_check('postgres', "SELECT count(*) FROM tab_int");
print "standby 2: $result\n";
is($result, qq(1002), 'check streamed content on standby 2');
# Check that only READ-only queries can run on standbys
-$node_standby_1->command_fails(
- [ 'psql', '-A',
- '-t', '--no-psqlrc',
- '-d', $node_standby_1->connstr,
- '-c', "INSERT INTO tab_int VALUES (1)" ],
- 'Read-only queries on standby 1');
-$node_standby_2->command_fails(
- [ 'psql', '-A',
- '-t', '--no-psqlrc',
- '-d', $node_standby_2->connstr,
- '-c', "INSERT INTO tab_int VALUES (1)" ],
- 'Read-only queries on standby 2');
+is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'), 3, 'Read-only queries on standby 1');
+is($node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'), 3, 'Read-only queries on standby 2');
diff --git a/src/test/recovery/t/002_archiving.pl b/src/test/recovery/t/002_archiving.pl
index 67bb7df..d2eb01b 100644
--- a/src/test/recovery/t/002_archiving.pl
+++ b/src/test/recovery/t/002_archiving.pl
@@ -30,16 +30,16 @@ wal_retrieve_retry_interval = '100ms'
$node_standby->start;
# Create some content on master
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a");
my $current_lsn =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
# Force archiving of WAL file to make it present on master
-$node_master->psql('postgres', "SELECT pg_switch_xlog()");
+$node_master->psql_check('postgres', "SELECT pg_switch_xlog()");
# Add some more content, it should not be present on standby
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"INSERT INTO tab_int VALUES (generate_series(1001,2000))");
# Wait until necessary replay has been done on standby
@@ -48,5 +48,5 @@ my $caughtup_query =
$node_standby->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby to catch up";
-my $result = $node_standby->psql('postgres', "SELECT count(*) FROM tab_int");
+my $result = $node_standby->psql_check('postgres', "SELECT count(*) FROM tab_int");
is($result, qq(1000), 'check content from archives');
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
index a0d8b45..7dcb734 100644
--- a/src/test/recovery/t/003_recovery_targets.pl
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -38,7 +38,7 @@ sub test_recovery_standby
# Create some content on master and check its presence in standby
my $result =
- $node_standby->psql('postgres', "SELECT count(*) FROM tab_int");
+ $node_standby->psql_check('postgres', "SELECT count(*) FROM tab_int");
is($result, qq($num_rows), "check standby content for $test_name");
# Stop standby node
@@ -54,40 +54,40 @@ $node_master->start;
# Create data before taking the backup, aimed at testing
# recovery_target = 'immediate'
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a");
my $lsn1 =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
# Take backup from which all operations will be run
$node_master->backup('my_backup');
# Insert some data with used as a replay reference, with a recovery
# target TXID.
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"INSERT INTO tab_int VALUES (generate_series(1001,2000))");
-my $recovery_txid = $node_master->psql('postgres', "SELECT txid_current()");
+my $recovery_txid = $node_master->psql_check('postgres', "SELECT txid_current()");
my $lsn2 =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
# More data, with recovery target timestamp
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"INSERT INTO tab_int VALUES (generate_series(2001,3000))");
-my $recovery_time = $node_master->psql('postgres', "SELECT now()");
+my $recovery_time = $node_master->psql_check('postgres', "SELECT now()");
my $lsn3 =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
# Even more data, this time with a recovery target name
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"INSERT INTO tab_int VALUES (generate_series(3001,4000))");
my $recovery_name = "my_target";
my $lsn4 =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
$node_master->psql_check('postgres',
"SELECT pg_create_restore_point('$recovery_name');");
# Force archiving of WAL file
-$node_master->psql('postgres', "SELECT pg_switch_xlog()");
+$node_master->psql_check('postgres', "SELECT pg_switch_xlog()");
# Test recovery targets
my @recovery_params = ("recovery_target = 'immediate'");
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index a180b94..e549680 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -30,10 +30,10 @@ $node_standby_2->init_from_backup($node_master, $backup_name,
$node_standby_2->start;
# Create some content on master
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a");
my $until_lsn =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
# Wait until standby has replayed enough data on standby 1
my $caughtup_query =
@@ -61,15 +61,15 @@ $node_standby_2->restart;
# to exit recovery first before moving on with the test.
$node_standby_1->poll_query_until('postgres',
"SELECT pg_is_in_recovery() <> true");
-$node_standby_1->psql('postgres',
+$node_standby_1->psql_check('postgres',
"INSERT INTO tab_int VALUES (generate_series(1001,2000))");
$until_lsn =
- $node_standby_1->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_standby_1->psql_check('postgres', "SELECT pg_current_xlog_location();");
$caughtup_query =
"SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
$node_standby_2->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby to catch up";
my $result =
- $node_standby_2->psql('postgres', "SELECT count(*) FROM tab_int");
+ $node_standby_2->psql_check('postgres', "SELECT count(*) FROM tab_int");
is($result, qq(2000), 'check content of standby 2');
diff --git a/src/test/recovery/t/005_replay_delay.pl b/src/test/recovery/t/005_replay_delay.pl
index 401d17b..d8e20c0 100644
--- a/src/test/recovery/t/005_replay_delay.pl
+++ b/src/test/recovery/t/005_replay_delay.pl
@@ -11,7 +11,7 @@ $node_master->init(allows_streaming => 1);
$node_master->start;
# And some content
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,10) AS a");
# Take backup
@@ -30,20 +30,20 @@ $node_standby->start;
# Make new content on master and check its presence in standby
# depending on the delay of 2s applied above.
-$node_master->psql('postgres',
+$node_master->psql_check('postgres',
"INSERT INTO tab_int VALUES (generate_series(11,20))");
sleep 1;
# Here we should have only 10 rows
-my $result = $node_standby->psql('postgres', "SELECT count(*) FROM tab_int");
+my $result = $node_standby->psql_check('postgres', "SELECT count(*) FROM tab_int");
is($result, qq(10), 'check content with delay of 1s');
# Now wait for replay to complete on standby
my $until_lsn =
- $node_master->psql('postgres', "SELECT pg_current_xlog_location();");
+ $node_master->psql_check('postgres', "SELECT pg_current_xlog_location();");
my $caughtup_query =
"SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
$node_standby->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby to catch up";
-$result = $node_standby->psql('postgres', "SELECT count(*) FROM tab_int");
+$result = $node_standby->psql_check('postgres', "SELECT count(*) FROM tab_int");
is($result, qq(20), 'check content with delay of 2s');
--
2.1.0
0007-TAP-Add-support-for-taking-filesystem-level-backups.patchtext/x-patch; charset=US-ASCII; name=0007-TAP-Add-support-for-taking-filesystem-level-backups.patchDownload
From 91d6718c9a31c9f7e50090867a8784072a963a89 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Tue, 1 Mar 2016 21:21:25 +0800
Subject: [PATCH 7/7] TAP: Add support for taking filesystem level backups
---
src/test/perl/PostgresNode.pm | 88 +++++++++++++++++++++++++++++++++++++------
1 file changed, 76 insertions(+), 12 deletions(-)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index c637a9b..ba74aa5 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -461,11 +461,86 @@ sub backup
my $port = $self->port;
my $name = $self->name;
- print "# Taking backup $backup_name from node \"$name\"\n";
+ print "# Taking pg_basebackup $backup_name from node \"$name\"\n";
TestLib::system_or_bail("pg_basebackup -D $backup_path -p $port -x");
print "# Backup finished\n";
}
+=item $node->backup_fs_hot(backup_name)
+
+Create a backup with a filesystem level copy in $node->backup_dir,
+including transaction logs. Archiving must be enabled as pg_start_backup
+and pg_stop_backup are used. This is not checked or enforced.
+
+The backup name is passed as the backup label to pg_start_backup.
+
+=cut
+
+sub backup_fs_hot
+{
+ my ($self, $backup_name) = @_;
+ $self->_backup_fs($backup_name, 1);
+}
+
+=item $node->backup_fs_cold(backup_name)
+
+Create a backup with a filesystem level copy in $node->backup dir,
+including transaction logs. The server must be stopped as no
+attempt to handle concurrent writes is made.
+
+Use backup or backup_fs_hot if you want to back up a running
+server.
+
+=cut
+
+sub backup_fs_cold
+{
+ my ($self, $backup_name) = @_;
+ $self->_backup_fs($backup_name, 0);
+}
+
+
+# Common sub of backup_fs_hot and backup_fs_cold
+sub _backup_fs
+{
+ my ($self, $backup_name, $hot) = @_;
+ my $backup_path = $self->backup_dir . '/' . $backup_name;
+ my $port = $self->port;
+ my $name = $self->name;
+
+ print
+ "# Taking filesystem level backup $backup_name from node \"$name\"\n";
+
+ if ($hot)
+ {
+ my $stdout = $self->psql_check('postgres',
+ "SELECT * FROM pg_start_backup('$backup_name');");
+ print "# pg_start_backup: $stdout\n";
+ }
+
+ RecursiveCopy::copypath(
+ $self->data_dir,
+ $backup_path,
+ filterfn => sub {
+ my $src = shift;
+ return $src !~ /\/pg_log\// && $src !~ /\/postmaster.pid$/;
+ });
+
+ if ($hot)
+ {
+ # We ignore pg_stop_backup's return value. We also assume archiving
+ # is enabled; otherwise the caller will have to copy the remaining
+ # segments.
+ my $stdout =
+ $self->psql_check('postgres', 'SELECT * FROM pg_stop_backup();');
+ print "# pg_stop_backup: $stdout\n";
+ }
+
+ print "# Backup finished\n";
+}
+
+
+
=pod
=item $node->init_from_backup(root_node, backup_name)
@@ -888,17 +963,6 @@ Pass additional parameters to psql. Must be an arrayref.
e.g.
- my ($stdout, $stderr, $timed_out);
- my $cmdret = $psql_expert('postgres', 'SELECT pg_sleep(60)',
- stdout => \$stdout, stderr => \$stderr,
- timeout => 30, timed_out => \$timed_out,
- extra_params => ['--single-transaction'])
-
-will set $cmdret to undef and $timed_out to a true value.
-
- $psql_expert('postgres', $sql, on_error_die => 1);
-
-dies with an informative message if $sql fails.
=cut
--
2.1.0
On Thu, Mar 3, 2016 at 4:11 PM, Craig Ringer <craig@2ndquadrant.com> wrote:
The first three are simple fixes that should go in without fuss:
001 fixes the above 5.8.8 compat issue.
002 fixes another minor whoopsie, a syntax error in
src/test/recovery/t/003_recovery_targets.pl that never got noticed because
exit codes are ignored.
Those two are embarrassing things... However:
-$node_master->psql('postgres',
- "SELECT pg_create_restore_point('$recovery_name'");
+$node_master->psql_check('postgres',
+ "SELECT pg_create_restore_point('$recovery_name');");
In 0002 this is not correct; psql_check is part of a routine you
introduce later on.
003 runs perltidy on PostgresNode.pm to bring it back into conformance after
the recovery tests commit.
No objections to that.
The rest are feature patches:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
Adding perldoc to this module should be done separately, let's not mix
things. Applying a filter is useful as well to remove for example the
contents of pg_xlog, so no objections to it.
005 adds the new psql functions psql_expert and psql_check. Then 006 renames
psql_expert to psql and fixes up all call sites across the tree to use the
new interface. I found the bug in 002 as part of that process. I anticipate
that 005 and 006 would be squashed into one commit to master, but I've kept
them separate in my tree for easier review.
psql_check sounds wrong to me. I thought first that this triggers a
test. Why not psql_simple or psql_basic, or just keep psql.
007 adds PostgresNode support for hot and cold filesystem-level backups
using pg_start_backup and pg_stop_backup, which will be required for some
coming tests and are useful by themselves.
It would be nice as well to have a flag in _backup_fs to filter the
contents of pg_xlog at will. The logging collector is not enabled in the
nodes deployed by PostgresNode, so why filter it?
--
Michael
On 3 March 2016 at 21:16, Michael Paquier <michael.paquier@gmail.com> wrote:
On Thu, Mar 3, 2016 at 4:11 PM, Craig Ringer <craig@2ndquadrant.com>
wrote:
The first three are simple fixes that should go in without fuss:
001 fixes the above 5.8.8 compat issue.
002 fixes another minor whoopsie, a syntax error in
src/test/recovery/t/003_recovery_targets.pl that never got noticed because
exit codes are ignored.
Those two are embarrassing things... However:
-$node_master->psql('postgres',
- "SELECT pg_create_restore_point('$recovery_name'");
+$node_master->psql_check('postgres',
+ "SELECT pg_create_restore_point('$recovery_name');");
In 0002 this is not correct, psql_check is part of a routine you
introduce later on.
Damn, I meant to fix that when I rebased it back in the history but forgot.
Thank you.
The rest are feature patches:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
Adding perldoc to this module should be done separately, let's not mix
things. Applying a filter is useful as well to remove for example the
contents of pg_xlog, so no objections to it.
Eh, ok. I figured it was so trivial it didn't matter, but will split.
005 adds the new psql functions psql_expert and psql_check. Then 006
renames psql_expert to psql and fixes up all call sites across the tree
to use the new interface. I found the bug in 002 as part of that
process. I anticipate that 005 and 006 would be squashed into one
commit to master, but I've kept them separate in my tree for easier
review.
psql_check sounds wrong to me. I thought first that this triggers a
test. Why not psql_simple or psql_basic, or just keep psql.
I guess I'm used to Python's subprocess.check_call so to me it's natural.
I want something that makes it clear that failure is a fatal error
condition, i.e. "do this in psql and if it produces an error, treat it like
you would any other error in Perl and die appropriately".
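To make the intent concrete, here is a minimal sketch in the style of the
existing recovery tests; it assumes the usual test preamble (PostgresNode,
TestLib, Test::More) and an already-initialized, running $node_master, so
treat it as illustration rather than committed code:

    # psql_check dies with an informative message if the SQL fails, so a
    # broken statement can't slip through via an ignored exit code; on
    # success it returns the query output just like the old psql helper.
    $node_master->psql_check('postgres',
        'CREATE TABLE tab_check AS SELECT generate_series(1,10) AS a');
    my $count = $node_master->psql_check('postgres',
        'SELECT count(*) FROM tab_check');
    is($count, 10, 'psql_check ran the query and returned its output');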
007 adds PostgresNode support for hot and cold filesystem-level backups
using pg_start_backup and pg_stop_backup, which will be required for some
coming tests and are useful by themselves.
It would be nice as well to have a flag in _backup_fs to filter the
contents of pg_xlog at will.
i.e. copy without any pg_xlog at all? without_xlog => 1 ?
The logging collector is not enabled in the nodes
deployed by PostgresNode, so why filter it?
Because at the time I wrote that the test dirs were deleted unconditionally
and I didn't bother checking if we actually wrote anything to pg_log. Also,
because if it _does_ get enabled by someone I can't imagine why we'd want
to copy the logs.
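For illustration, a hypothetical sketch of how such a flag could sit on top
of the filter _backup_fs already uses; the without_xlog name is only the
suggestion from this exchange, not committed code:

    # Hypothetical: skip everything under pg_xlog when asked, on top of
    # the existing pg_log / postmaster.pid exclusions. $without_xlog
    # would be a new parameter accepted by _backup_fs.
    RecursiveCopy::copypath(
        $self->data_dir, $backup_path,
        filterfn => sub {
            my $src = shift;
            return 0 if $without_xlog && $src =~ m{/pg_xlog/};
            return $src !~ m{/pg_log/} && $src !~ m{/postmaster\.pid$};
        });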
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Craig Ringer wrote:
On 3 March 2016 at 21:16, Michael Paquier <michael.paquier@gmail.com> wrote:
The rest are feature patches:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
Adding perldoc to this module should be done separately, let's not mix
things. Applying a filter is useful as well to remove for example the
contents of pg_xlog, so no objections to it.
Eh, ok. I figured it was so trivial it didn't matter, but will split.
Please don't mess with this one.
psql_check sounds wrong to me. I thought first that this triggers a
test. Why not psql_simple or psql_basic, or just keep psql.
I guess I'm used to Python's subprocess.check_call so to me it's natural.
I want something that makes it clear that failure is a fatal error
condition, i.e. "do this in psql and if it produces an error, treat it like
you would any other error in Perl and die appropriately".
Shrug. psql_check seemed reasonable to me for that.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Craig Ringer wrote:
The first three are simple fixes that should go in without fuss:
001 fixes the above 5.8.8 compat issue.
002 fixes another minor whoopsie, a syntax error in src/test/recovery/t/
003_recovery_targets.pl that never got noticed because exit codes are
ignored.
003 runs perltidy on PostgresNode.pm to bring it back into conformance
after the recovery tests commit.
The rest are feature patches:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
005 adds the new psql functions psql_expert and psql_check. Then 006
renames psql_expert to psql and fixes up all call sites across the tree to
use the new interface. I found the bug in 002 as part of that process. I
anticipate that 005 and 006 would be squashed into one commit to master,
but I've kept them separate in my tree for easier review.
007 adds PostgresNode support for hot and cold filesystem-level backups
using pg_start_backup and pg_stop_backup, which will be required for some
coming tests and are useful by themselves.
Okay, so far I have pushed 0001 and 0002 squashed (commit 5bec1ad4648),
0003 (commit 7d9a4301c08), 0005 and 0006 squashed (commit 2c83f435a3de).
In the last one I chose to rename your psql_check to safe_psql and
tweaked a few other things, not worthy of individual mention. I think
the result should still work on Perl 5.8 though I didn't actually verify
that -- I don't think I made any changes that would affect portability.
I will be downloading your Dockerized stuff shortly while I still have a
convenient network connection, just in case.
Patches 0004 and 0007 remain. I think 0007 is uncontroversial; I
reworked 0004 a bit and gave it to someone else to finish a couple of
things I didn't quite like -- hopefully she'll be submitting a new
version soonish. Once we have that I'm happy to push them too.
I don't expect to be doing much more on the framework at this point as I
want to be able to get back to the code I had to enhance the framework in
order to test....
How come!?!?
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 4 March 2016 at 05:08, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Okay, so far I have pushed 0001 and 0002 squashed (commit 5bec1ad4648),
0003 (commit 7d9a4301c08), 0005 and 0006 squashed (commit 2c83f435a3de).
In the last one I chose to rename your psql_check to safe_psql and
tweaked a few other things, not worthy of individual mention. I think
the result should still work on Perl 5.8 though I didn't actually verify
that -- I don't think I made any changes that would affect portability.
I will be downloading your Dockerized stuff shortly while I still have a
convenient network connection, just in case.
Thanks very much. Your commit messages are much better too.
Patches 0004 and 0007 remain.
For readers who're not following closely that's the filtering support for
RecursiveCopy and the support for taking filesystem-level backups in
PostgresNode that uses it.
I think 0007 is uncontroversial; I
reworked 0004 a bit and gave it to someone else to finish a couple of
things I didn't quite like -- hopefully she'll be submitting a new
version soonish. Once we have that I'm happy to push them too.
Great, thanks.
I don't expect to be doing much more on the framework at this point as I
want to be able to get back to the code I had to enhance the framework in
order to test....
How come!?!?
This yak grows hair faster than I can shave it ;)
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Mar 4, 2016 at 8:22 AM, Craig Ringer <craig@2ndquadrant.com> wrote:
On 4 March 2016 at 05:08, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Patches 0004 and 0007 remain.
For readers who're not following closely that's the filtering support for
RecursiveCopy and the support for taking filesystem-level backups in
PostgresNode that uses it.
I am still not sure that the filter for pg_log by default is that useful but...
I think 0007 is uncontroversial; I
reworked 0004 a bit and gave it to someone else to finish a couple of
things I didn't quite like -- hopefully she'll be submitting a new
version soonish. Once we have that I'm happy to push them too.
Great, thanks.
No objections from here as well for the feature itself. I am glad to
see interest in extending the current infrastructure, and just
wondering what kind of tests are going to show up.
--
Michael
On 4 March 2016 at 12:54, Michael Paquier <michael.paquier@gmail.com> wrote:
No objections from here as well for the feature itself. I am glad to
see interest in extending the current infrastructure, and just
wondering what kind of tests are going to show up.
The tests themselves really aren't that exciting. Create a slot, online
base-backup the node, start it in archive recovery, immediately fail over
to it or do some amount of writes first, do some more writes on it, then
read from the logical slot on the replica and prove that we can
successfully read the logical change stream across the timeline
switch.
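In rough strokes, using the helpers from this thread, such a test might look
like the sketch below. Every name is illustrative, the configuration needed
(wal_level = logical, max_replication_slots, archiving for backup_fs_hot) is
assumed, and the promotion step is hand-waved:

    # Illustrative sketch only, assuming the usual recovery-test preamble
    # and a running $node_master configured for logical decoding.
    $node_master->safe_psql('postgres',
        "SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding')");
    $node_master->backup_fs_hot('tl_backup');

    my $node_replica = get_new_node('replica');
    $node_replica->init_from_backup($node_master, 'tl_backup');
    $node_replica->start;

    # ... promote the replica, write a few more rows on it, then prove the
    # slot can still be read across the timeline switch:
    my $changes = $node_replica->safe_psql('postgres',
        "SELECT count(*) FROM pg_logical_slot_get_changes('test_slot', NULL, NULL)");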
I'm now adding a module that goes a step further by exposing some functions
to SQL (test only, if you use it in production you're insane) to permit
creation of a logical slot on a replica and to advance that slot's
persistent data. It basically lets you advance the slot state on the
replica while it's in recovery so you can retain slot states copied with a
filesystem-level base backup and have the slots ready to use when you
promote the server.
I'm doing it to prove the concept of doing logical failover with an
extension and without full WAL-based failover slots in order to get the
timeline following for logical decoding patch committed separately to, and
prior to, failover slots. It's complicated enough that it needs a variety
of tests, but self-contained and non-intrusive enough to be a fairly easy
commit if it's shown to work well.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 4 March 2016 at 05:08, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Okay, so far I have pushed 0001 and 0002 squashed (commit 5bec1ad4648),
0003 (commit 7d9a4301c08), 0005 and 0006 squashed (commit 2c83f435a3de).
In the last one I chose to rename your psql_check to safe_psql and
tweaked a few other things, not worthy of individual mention.
I've just noticed that I failed to initialize $$timed_out to zero in
PostgresNode::psql if a ref is passed for it.
Fix attached.
The tests are proving useful already; I've shown that timeline following
works great with a filesystem-level copy, but that there are WAL retention
issues (unsurprisingly) when the backup is created without slots and the
slot state is then synced over by an external client.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 4 March 2016 at 20:35, Craig Ringer <craig@2ndquadrant.com> wrote:
Fix attached.
Apparently I need a "remind me if you see the word attach in an email"
plugin. Sigh.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-TAP-Correctly-initialized-timed_out-if-passed.patch (text/x-patch)
From b7158678086f137eaa38782aadb51ec16c44871b Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Fri, 4 Mar 2016 20:40:43 +0800
Subject: [PATCH] TAP: Correctly initialized $timed_out if passed
---
src/test/perl/PostgresNode.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index f0f70ea..ea4fb63 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1025,6 +1025,8 @@ sub psql
IPC::Run::timeout($params{timeout}, exception => $timeout_exception)
if (defined($params{timeout}));
+ ${$params{timed_out}} = 0 if defined $params{timed_out};
+
# IPC::Run would otherwise append to existing contents:
$$stdout = "" if ref($stdout);
$$stderr = "" if ref($stderr);
--
2.1.0
Craig Ringer wrote:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
I pushed this after some tinkering:
* filtering applies to all directory entries, not just files. So you
can filter a subdirectory, for example. If you have symlinks (which you
know this module will bomb out on) you can skip them too.
* The filter sub receives the path relative to the initial source dir,
rather than the absolute path. That part was just making it difficult
to use; in your POD example you were testing /pg_log/ as a regex, which
skipped the *files* but not the directory itself. Now you can simply
write "$var ne 'pg_log'" (see the sketch after this list). (Pallavi
Sontakke implemented most of this.)
* Your test for "ref $filterfn" allowed any reference to be passed, not
just a code reference. I didn't test passing some other type of
reference; I assume you'd just get a very ugly error message.
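For reference, a minimal sketch of a filter under those semantics; the call
shape follows the patch earlier in this thread, so treat the details as
illustrative:

    # The filter now sees each entry's path relative to the source
    # directory, so a directory can be excluded by name and its
    # contents are never visited at all.
    RecursiveCopy::copypath(
        $node->data_dir, $backup_path,
        filterfn => sub {
            my $rel = shift;
            return $rel ne 'pg_log' && $rel ne 'postmaster.pid';
        });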
Thanks,
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Craig Ringer wrote:
007 adds PostgresNode support for hot and cold filesystem-level backups
using pg_start_backup and pg_stop_backup, which will be required for some
coming tests and are useful by themselves.
Finally, pushed this one after rebasing on top of the changes in the
others. I think we're done here, at last ...
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 10 March 2016 at 05:30, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Craig Ringer wrote:
004 allows filtering on RecursiveCopy by a predicate function. Needed for
filesystem level backups (007). It could probably be squashed with 007 if
desired.
I pushed this after some tinkering:
* filtering applies to all directory entries, not just files. So you
can filter a subdirectory, for example. If you have symlinks (which you
know this module will bomb out on) you can skip them too.
* The filter sub receives the path relative to the initial source dir,
rather than the absolute path. That part was just making it difficult
to use; in your POD example you were testing /pg_log/ as a regex, which
skipped the *files* but not the directory itself.
That was actually intentional, but I agree that your change is better.
Thanks for pushing.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 10 March 2016 at 09:53, Craig Ringer <craig@2ndquadrant.com> wrote:
Thanks for pushing.
I found a minor issue with the new psql method while writing tests for
failover slots. Patch attached.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
0001-TAP-Correctly-initialize-timed_out-if-passed.patch (text/x-patch)
From c1ee29e350d9532ae9d536252a0ebefc136c4a39 Mon Sep 17 00:00:00 2001
From: Craig Ringer <craig@2ndquadrant.com>
Date: Fri, 4 Mar 2016 20:40:43 +0800
Subject: [PATCH] TAP: Correctly initialize $timed_out if passed
Corrects an oversight of mine in 2c83f435a3 (rework PostgresNode's psql
method) where the $timed_out reference var isn't correctly initialized.
This doesn't affect any existing test code in the tree.
---
src/test/perl/PostgresNode.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 090d48c..3a740ec 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1032,6 +1032,8 @@ sub psql
IPC::Run::timeout($params{timeout}, exception => $timeout_exception)
if (defined($params{timeout}));
+ ${$params{timed_out}} = 0 if defined $params{timed_out};
+
# IPC::Run would otherwise append to existing contents:
$$stdout = "" if ref($stdout);
$$stderr = "" if ref($stderr);
--
2.1.0
Craig Ringer wrote:
I found a minor issue with the new psql method while writing tests for
failover slots. Patch attached.
Thanks, pushed.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services