Rewriting the test of pg_upgrade as a TAP test
Hi all,
The $subject has been mentioned a couple of times already, the last
one being here:
/messages/by-id/20170401072814.GA2528448@tornado.leadboat.com
The code tree now has to maintain two sets of scripts for the same
test: test.sh for all *nix platforms, and vcregress.pl for MSVC.
Attached is a patch to remove that duplication and replace the existing
test with a TAP test. The size of the patch speaks for itself:
6 files changed, 98 insertions(+), 389 deletions(-)
I have had a WIP patch gathering dust for some time, so attached is a
more polished version. In more detail, here is what happens:
- test.sh is removed.
- vcregress.pl loses upgradecheck.
- The new test is added. In the case of MSVC this is now part of bincheck.
The patch has been tested on macOS and Windows.
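For reference, the suite is then invoked through the usual targets; on
Windows it simply runs with the rest of the binary checks:

    cd src/bin/pg_upgrade
    make check

    # or, with the MSVC scripts:
    vcregress bincheck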
I am parking that in the next commit fest.
Regards,
--
Michael
Attachments:
pgupgrade-tap-test.patch (application/octet-stream)
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index ecec0a60c7..b7651c5b33 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -452,7 +452,6 @@ $ENV{CONFIG}="Debug";
<userinput>vcregress isolationcheck</userinput>
<userinput>vcregress bincheck</userinput>
<userinput>vcregress recoverycheck</userinput>
-<userinput>vcregress upgradecheck</userinput>
</screen>
To change the schedule used (default is parallel), append it to the
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 8823288708..a78b39f3e5 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -35,9 +35,8 @@ clean distclean maintainer-clean:
pg_upgrade_dump_globals.sql \
pg_upgrade_dump_*.custom pg_upgrade_*.log
-check: test.sh all
- MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $< --install
+check:
+ $(prove_check)
-# disabled because it upsets the build farm
-#installcheck: test.sh
-# MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) $(SHELL) $<
+installcheck:
+ $(prove_installcheck)
diff --git a/src/bin/pg_upgrade/TESTING b/src/bin/pg_upgrade/TESTING
index 4ecfc5798e..4885164d04 100644
--- a/src/bin/pg_upgrade/TESTING
+++ b/src/bin/pg_upgrade/TESTING
@@ -61,7 +61,7 @@ steps:
7) Diff the regression database dump file with the regression dump
file loaded into the old server.
-The shell script test.sh in this directory performs more or less this
+The TAP test scripts in this directory perform more or less this
procedure. You can invoke it by running
make check
@@ -69,13 +69,3 @@ procedure. You can invoke it by running
or by running
make installcheck
-
-if "make install" (or "make install-world") were done beforehand.
-When invoked without arguments, it will run an upgrade from the
-version in this source tree to a new instance of the same version. To
-test an upgrade from a different version, invoke it like this:
-
- make installcheck oldbindir=...otherversion/bin oldsrc=...somewhere/postgresql
-
-In this case, you will have to manually eyeball the resulting dump
-diff for version-specific differences, as explained above.
diff --git a/src/bin/pg_upgrade/t/010_pg_upgrade.pl b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
new file mode 100644
index 0000000000..765bd963bb
--- /dev/null
+++ b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
@@ -0,0 +1,91 @@
+# Set of tests for pg_upgrade.
+use strict;
+use warnings;
+use Cwd;
+use Config;
+use File::Basename;
+use IPC::Run;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 9;
+
+program_help_ok('pg_upgrade');
+program_version_ok('pg_upgrade');
+program_options_handling_ok('pg_upgrade');
+
+# Generate a database with a name made of a range of ASCII characters.
+sub generate_db
+{
+ my ($node, $from_char, $to_char) = @_;
+
+ my $dbname = '';
+ for my $i ($from_char .. $to_char)
+ {
+ next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
+ $dbname = $dbname . sprintf('%c', $i);
+ }
+ $node->run_log([ 'createdb', '--port', $node->port, $dbname ]);
+}
+
+my $startdir = getcwd();
+
+# From now on, the test of pg_upgrade consists in setting up an instance
+# on which regression tests are run. This is the source instance used
+# for the upgrade. Then a new, fresh instance is created, and is used
+# as the target instance for the upgrade. Before running an upgrade a
+# logical dump of the old instance is taken, and a second logical dump
+# of the new instance is taken after the upgrade. The upgrade test
+# passes if there are no differences after running pg_upgrade.
+
+# Temporary location for dumps taken
+my $tempdir = TestLib::tempdir;
+
+# Initialize node to upgrade
+my $oldnode = get_new_node('old_node');
+$oldnode->init;
+$oldnode->start;
+
+# Creating databases with names covering most ASCII bytes
+generate_db($oldnode, 1, 45);
+generate_db($oldnode, 46, 90);
+generate_db($oldnode, 91, 127);
+
+# Run regression tests on the old instance
+chdir dirname($ENV{PG_REGRESS});
+$oldnode->run_log([ 'createdb', '--port', $oldnode->port, 'regression' ]);
+run_log([$ENV{PG_REGRESS}, '--schedule', './serial_schedule',
+ '--dlpath', '.', '--bindir=', '--use-existing',
+ '--port', $oldnode->port]);
+
+# Take a dump before performing the upgrade as a base comparison.
+run_log(['pg_dumpall', '--port', $oldnode->port, '-f', "$tempdir/dump1.sql"]);
+
+# Move back to current directory, all logs generated need to be located
+# at the origin.
+chdir $startdir;
+
+# Update the instance.
+$oldnode->stop;
+
+# pg_upgrade needs the location of the old and new binaries. This test
+# relies on binaries being in PATH, as does pg_config, so fetch the real
+# binary location from it.
+my ($bindir, $stderr);
+my $result = IPC::Run::run [ 'pg_config', '--bindir' ], '>', \$bindir, '2>', \$stderr;
+chomp($bindir);
+
+# Initialize a new node for the upgrade.
+my $newnode = get_new_node('new_node');
+$newnode->init;
+
+# Time for the real run.
+run_log(['pg_upgrade', '-d', $oldnode->data_dir, '-D', $newnode->data_dir,
+ '-b', $bindir, '-B', $bindir, '-p', $oldnode->port, '-P', $newnode->port]);
+$newnode->start;
+
+# Take a second dump on the upgraded instance.
+run_log(['pg_dumpall', '--port', $newnode->port, '-f', "$tempdir/dump2.sql"]);
+
+# Compare the two dumps, there should be no differences.
+command_ok(['diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql"],
+ 'Old and new dump checks after pg_upgrade');
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
deleted file mode 100644
index cbc5259550..0000000000
--- a/src/bin/pg_upgrade/test.sh
+++ /dev/null
@@ -1,248 +0,0 @@
-#!/bin/sh
-
-# src/bin/pg_upgrade/test.sh
-#
-# Test driver for pg_upgrade. Initializes a new database cluster,
-# runs the regression tests (to put in some data), runs pg_dumpall,
-# runs pg_upgrade, runs pg_dumpall again, compares the dumps.
-#
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
-# Portions Copyright (c) 1994, Regents of the University of California
-
-set -e
-
-: ${MAKE=make}
-
-# Guard against parallel make issues (see comments in pg_regress.c)
-unset MAKEFLAGS
-unset MAKELEVEL
-
-# Run a given "initdb" binary and overlay the regression testing
-# authentication configuration.
-standard_initdb() {
- "$1" -N
- if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
- then
- cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
- fi
- ../../test/regress/pg_regress --config-auth "$PGDATA"
-}
-
-# Establish how the server will listen for connections
-testhost=`uname -s`
-
-case $testhost in
- MINGW*)
- LISTEN_ADDRESSES="localhost"
- PGHOST=localhost
- ;;
- *)
- LISTEN_ADDRESSES=""
- # Select a socket directory. The algorithm is from the "configure"
- # script; the outcome mimics pg_regress.c:make_temp_sockdir().
- PGHOST=$PG_REGRESS_SOCK_DIR
- if [ "x$PGHOST" = x ]; then
- {
- dir=`(umask 077 &&
- mktemp -d /tmp/pg_upgrade_check-XXXXXX) 2>/dev/null` &&
- [ -d "$dir" ]
- } ||
- {
- dir=/tmp/pg_upgrade_check-$$-$RANDOM
- (umask 077 && mkdir "$dir")
- } ||
- {
- echo "could not create socket temporary directory in \"/tmp\""
- exit 1
- }
-
- PGHOST=$dir
- trap 'rm -rf "$PGHOST"' 0
- trap 'exit 3' 1 2 13 15
- fi
- ;;
-esac
-
-POSTMASTER_OPTS="-F -c listen_addresses=$LISTEN_ADDRESSES -k \"$PGHOST\""
-export PGHOST
-
-# don't rely on $PWD here, as old shells don't set it
-temp_root=`pwd`/tmp_check
-
-if [ "$1" = '--install' ]; then
- temp_install=$temp_root/install
- bindir=$temp_install/$bindir
- libdir=$temp_install/$libdir
-
- "$MAKE" -s -C ../.. install DESTDIR="$temp_install"
-
- # platform-specific magic to find the shared libraries; see pg_regress.c
- LD_LIBRARY_PATH=$libdir:$LD_LIBRARY_PATH
- export LD_LIBRARY_PATH
- DYLD_LIBRARY_PATH=$libdir:$DYLD_LIBRARY_PATH
- export DYLD_LIBRARY_PATH
- LIBPATH=$libdir:$LIBPATH
- export LIBPATH
- SHLIB_PATH=$libdir:$SHLIB_PATH
- export SHLIB_PATH
- PATH=$libdir:$PATH
-
- # We need to make it use psql from our temporary installation,
- # because otherwise the installcheck run below would try to
- # use psql from the proper installation directory, which might
- # be outdated or missing. But don't override anything else that's
- # already in EXTRA_REGRESS_OPTS.
- EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --bindir='$bindir'"
- export EXTRA_REGRESS_OPTS
-fi
-
-: ${oldbindir=$bindir}
-
-: ${oldsrc=../../..}
-oldsrc=`cd "$oldsrc" && pwd`
-newsrc=`cd ../../.. && pwd`
-
-PATH=$bindir:$PATH
-export PATH
-
-BASE_PGDATA=$temp_root/data
-PGDATA="$BASE_PGDATA.old"
-export PGDATA
-rm -rf "$BASE_PGDATA" "$PGDATA"
-
-logdir=`pwd`/log
-rm -rf "$logdir"
-mkdir "$logdir"
-
-# Clear out any environment vars that might cause libpq to connect to
-# the wrong postmaster (cf pg_regress.c)
-#
-# Some shells, such as NetBSD's, return non-zero from unset if the variable
-# is already unset. Since we are operating under 'set -e', this causes the
-# script to fail. To guard against this, set them all to an empty string first.
-PGDATABASE=""; unset PGDATABASE
-PGUSER=""; unset PGUSER
-PGSERVICE=""; unset PGSERVICE
-PGSSLMODE=""; unset PGSSLMODE
-PGREQUIRESSL=""; unset PGREQUIRESSL
-PGCONNECT_TIMEOUT=""; unset PGCONNECT_TIMEOUT
-PGHOSTADDR=""; unset PGHOSTADDR
-
-# Select a non-conflicting port number, similarly to pg_regress.c
-PG_VERSION_NUM=`grep '#define PG_VERSION_NUM' "$newsrc"/src/include/pg_config.h | awk '{print $3}'`
-PGPORT=`expr $PG_VERSION_NUM % 16384 + 49152`
-export PGPORT
-
-i=0
-while psql -X postgres </dev/null 2>/dev/null
-do
- i=`expr $i + 1`
- if [ $i -eq 16 ]
- then
- echo port $PGPORT apparently in use
- exit 1
- fi
- PGPORT=`expr $PGPORT + 1`
- export PGPORT
-done
-
-# buildfarm may try to override port via EXTRA_REGRESS_OPTS ...
-EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --port=$PGPORT"
-export EXTRA_REGRESS_OPTS
-
-# enable echo so the user can see what is being executed
-set -x
-
-standard_initdb "$oldbindir"/initdb
-"$oldbindir"/pg_ctl start -l "$logdir/postmaster1.log" -o "$POSTMASTER_OPTS" -w
-
-# Create databases with names covering the ASCII bytes other than NUL, BEL,
-# LF, or CR. BEL would ring the terminal bell in the course of this test, and
-# it is not otherwise a special case. PostgreSQL doesn't support the rest.
-dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)
- if (i != 7 && i != 10 && i != 13) printf "%c", i }' </dev/null`
-# Exercise backslashes adjacent to double quotes, a Windows special case.
-dbname1='\"\'$dbname1'\\"\\\'
-dbname2=`awk 'BEGIN { for (i = 46; i < 91; i++) printf "%c", i }' </dev/null`
-dbname3=`awk 'BEGIN { for (i = 91; i < 128; i++) printf "%c", i }' </dev/null`
-createdb "$dbname1" || createdb_status=$?
-createdb "$dbname2" || createdb_status=$?
-createdb "$dbname3" || createdb_status=$?
-
-if "$MAKE" -C "$oldsrc" installcheck; then
- pg_dumpall -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
- if [ "$newsrc" != "$oldsrc" ]; then
- oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%'; DROP FUNCTION public.myfunc(integer);"
- ;;
- 900??)
- fix_sql="SET bytea_output TO escape; UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%';"
- ;;
- 901??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE '$oldsrc%';"
- ;;
- esac
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- mv "$temp_root"/dump1.sql "$temp_root"/dump1.sql.orig
- sed "s;$oldsrc;$newsrc;g" "$temp_root"/dump1.sql.orig >"$temp_root"/dump1.sql
- fi
-else
- make_installcheck_status=$?
-fi
-"$oldbindir"/pg_ctl -m fast stop
-if [ -n "$createdb_status" ]; then
- exit 1
-fi
-if [ -n "$make_installcheck_status" ]; then
- exit 1
-fi
-if [ -n "$psql_fix_sql_status" ]; then
- exit 1
-fi
-if [ -n "$pg_dumpall1_status" ]; then
- echo "pg_dumpall of pre-upgrade database cluster failed"
- exit 1
-fi
-
-PGDATA=$BASE_PGDATA
-
-standard_initdb 'initdb'
-
-pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "${PGDATA}" -b "$oldbindir" -B "$bindir" -p "$PGPORT" -P "$PGPORT"
-
-pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-
-case $testhost in
- MINGW*) cmd /c analyze_new_cluster.bat ;;
- *) sh ./analyze_new_cluster.sh ;;
-esac
-
-pg_dumpall -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
-pg_ctl -m fast stop
-
-# no need to echo commands anymore
-set +x
-echo
-
-if [ -n "$pg_dumpall2_status" ]; then
- echo "pg_dumpall of post-upgrade database cluster failed"
- exit 1
-fi
-
-case $testhost in
- MINGW*) cmd /c delete_old_cluster.bat ;;
- *) sh ./delete_old_cluster.sh ;;
-esac
-
-if diff "$temp_root"/dump1.sql "$temp_root"/dump2.sql >/dev/null; then
- echo PASSED
- exit 0
-else
- echo "Files $temp_root/dump1.sql and $temp_root/dump2.sql differ"
- echo "dumps were not identical"
- exit 1
-fi
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 8933920d9b..7d6fb6200a 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -34,7 +34,7 @@ if (-e "src/tools/msvc/buildenv.pl")
my $what = shift || "";
if ($what =~
-/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|upgradecheck|bincheck|recoverycheck)$/i
+/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|bincheck|recoverycheck)$/i
)
{
$what = uc $what;
@@ -89,8 +89,7 @@ my %command = (
MODULESCHECK => \&modulescheck,
ISOLATIONCHECK => \&isolationcheck,
BINCHECK => \&bincheck,
- RECOVERYCHECK => \&recoverycheck,
- UPGRADECHECK => \&upgradecheck,);
+ RECOVERYCHECK => \&recoverycheck,);
my $proc = $command{$what};
@@ -382,126 +381,6 @@ sub standard_initdb
$ENV{PGDATA}) == 0);
}
-# This is similar to appendShellString(). Perl system(@args) bypasses
-# cmd.exe, so omit the caret escape layer.
-sub quote_system_arg
-{
- my $arg = shift;
-
- # Change N >= 0 backslashes before a double quote to 2N+1 backslashes.
- $arg =~ s/(\\*)"/${\($1 . $1)}\\"/gs;
-
- # Change N >= 1 backslashes at end of argument to 2N backslashes.
- $arg =~ s/(\\+)$/${\($1 . $1)}/gs;
-
- # Wrap the whole thing in unescaped double quotes.
- return "\"$arg\"";
-}
-
-# Generate a database with a name made of a range of ASCII characters, useful
-# for testing pg_upgrade.
-sub generate_db
-{
- my ($prefix, $from_char, $to_char, $suffix) = @_;
-
- my $dbname = $prefix;
- for my $i ($from_char .. $to_char)
- {
- next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
- $dbname = $dbname . sprintf('%c', $i);
- }
- $dbname .= $suffix;
-
- system('createdb', quote_system_arg($dbname));
- my $status = $? >> 8;
- exit $status if $status;
-}
-
-sub upgradecheck
-{
- my $status;
- my $cwd = getcwd();
-
- # Much of this comes from the pg_upgrade test.sh script,
- # but it only covers the --install case, and not the case
- # where the old and new source or bin dirs are different.
- # i.e. only this version to this version check. That's
- # what pg_upgrade's "make check" does.
-
- $ENV{PGHOST} = 'localhost';
- $ENV{PGPORT} ||= 50432;
- my $tmp_root = "$topdir/src/bin/pg_upgrade/tmp_check";
- (mkdir $tmp_root || die $!) unless -d $tmp_root;
- my $upg_tmp_install = "$tmp_root/install"; # unshared temp install
- print "Setting up temp install\n\n";
- Install($upg_tmp_install, "all", $config);
-
- # Install does a chdir, so change back after that
- chdir $cwd;
- my ($bindir, $libdir, $oldsrc, $newsrc) =
- ("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
- $ENV{PATH} = "$bindir;$ENV{PATH}";
- my $data = "$tmp_root/data";
- $ENV{PGDATA} = "$data.old";
- my $logdir = "$topdir/src/bin/pg_upgrade/log";
- (mkdir $logdir || die $!) unless -d $logdir;
- print "\nRunning initdb on old cluster\n\n";
- standard_initdb() or exit 1;
- print "\nStarting old cluster\n\n";
- my @args = ('pg_ctl', 'start', '-l', "$logdir/postmaster1.log");
- system(@args) == 0 or exit 1;
-
- print "\nCreating databases with names covering most ASCII bytes\n\n";
- generate_db("\\\"\\", 1, 45, "\\\\\"\\\\\\");
- generate_db('', 46, 90, '');
- generate_db('', 91, 127, '');
-
- print "\nSetting up data for upgrading\n\n";
- installcheck();
-
- # now we can chdir into the source dir
- chdir "$topdir/src/bin/pg_upgrade";
- print "\nDumping old cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump1.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping old cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- $ENV{PGDATA} = "$data";
- print "\nSetting up new cluster\n\n";
- standard_initdb() or exit 1;
- print "\nRunning pg_upgrade\n\n";
- @args = (
- 'pg_upgrade', '-d', "$data.old", '-D', $data, '-b',
- $bindir, '-B', $bindir);
- system(@args) == 0 or exit 1;
- print "\nStarting new cluster\n\n";
- @args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
- system(@args) == 0 or exit 1;
- print "\nSetting up stats on new cluster\n\n";
- system(".\\analyze_new_cluster.bat") == 0 or exit 1;
- print "\nDumping new cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump2.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping new cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- print "\nDeleting old cluster\n\n";
- system(".\\delete_old_cluster.bat") == 0 or exit 1;
- print "\nComparing old and new cluster dumps\n\n";
-
- @args = ('diff', '-q', "$tmp_root/dump1.sql", "$tmp_root/dump2.sql");
- system(@args);
- $status = $?;
- if (!$status)
- {
- print "PASSED\n";
- }
- else
- {
- print "dumps not identical!\n";
- exit(1);
- }
-}
-
sub fetchRegressOpts
{
my $handle;
@@ -607,7 +486,6 @@ sub usage
" modulescheck run tests of modules in src/test/modules/\n",
" plcheck run tests of PL languages\n",
" recoverycheck run recovery test suite\n",
- " upgradecheck run tests of pg_upgrade\n",
"\nOptions for <schedule>:\n",
" serial serial mode\n",
" parallel parallel mode\n";
On 3 April 2017 at 21:07, Michael Paquier <michael.paquier@gmail.com> wrote:
Hi all,
The $subject has been mentioned a couple of times already, the last
one being here:
/messages/by-id/20170401072814.GA2528448@tornado.leadboat.com
The code tree now has to maintain two sets of scripts for the same
test: test.sh for all *nix platforms, and vcregress.pl for MSVC.
Attached is a patch to remove that duplication and replace the existing
test with a TAP test. The size of the patch speaks for itself:
6 files changed, 98 insertions(+), 389 deletions(-)
I have had a WIP patch gathering dust for some time, so attached is a
more polished version. In more detail, here is what happens:
- test.sh is removed.
- vcregress.pl loses upgradecheck.
- The new test is added. In the case of MSVC this is now part of bincheck.
The patch has been tested on macOS and Windows.
I am parking that in the next commit fest.
Great.
Count me in as reviewer, and feel free to poke me if I get caught up
in other things.
I'd like to see us adopting TAP for cross-version stuff in pg_dump etc
too, down the track.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Craig, Michael,
* Craig Ringer (craig@2ndquadrant.com) wrote:
On 3 April 2017 at 21:07, Michael Paquier <michael.paquier@gmail.com> wrote:
I am parking that in the next commit fest.
Great.
Count me in as reviewer, and feel free to poke me if I get caught up
in other things.
I'm interested in this also.
I'd like to see us adopting TAP for cross-version stuff in pg_dump etc
too, down the track.
I'm very curious what you're thinking here? IIRC, Andrew had some ideas
for how to do true cross-version testing with TAP in the buildfarm, but
I don't think we actually have that yet..?
Thanks!
Stephen
On 4/3/17 09:07, Michael Paquier wrote:
I have had a WIP patch gathering dust for some time, so attached is a
more polished version. In more detail, here is what happens:
- test.sh is removed.
- vcregress.pl loses upgradecheck.
- The new test is added. In the case of MSVC this is now part of bincheck.
The patch has been tested on macOS and Windows.
This is a useful start. What I'd really like to see is that instead of
running the full serial tests to populate the pre-upgrade database, we
determine a useful subset of what that ends up generating and just
populate with that.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-04-03 11:22:02 -0400, Peter Eisentraut wrote:
On 4/3/17 09:07, Michael Paquier wrote:
I have had a WIP patch gathering dust for some time, so attached is a
more polished version. In more detail, here is what happens:
- test.sh is removed.
- vcregress.pl loses upgradecheck.
- The new test is added. In the case of MSVC this is now part of bincheck.
The patch has been tested on macOS and Windows.
This is a useful start. What I'd really like to see is that instead of
running the full serial tests to populate the pre-upgrade database, we
determine a useful subset of what that ends up generating and just
populate with that.
That doesn't strike me as particularly future-proof. We intentionally
leave objects behind in pg_regress runs, but that only works if we
actually run them...
Greetings,
Andres Freund
Peter,
* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:
On 4/3/17 09:07, Michael Paquier wrote:
I have had a WIP patch gathering dust for some time, so attached is a
more polished version. In more detail, here is what happens:
- test.sh is removed.
- vcregress.pl loses upgradecheck.
- The new test is added. In the case of MSVC this is now part of bincheck.
The patch has been tested on macOS and Windows.
This is a useful start. What I'd really like to see is that instead of
running the full serial tests to populate the pre-upgrade database, we
determine a useful subset of what that ends up generating and just
populate with that.
In the past, we've had the notion that the regression tests are intended
to also cover pg_upgrade/pg_dump by "leaving things around". What I
found in my efforts to provide better coverage in pg_dump is that there
was quite a bit of coverage missing using that approach.
Perhaps that could be fixed, but I tend to think it's a better approach
to have a complete set of pg_upgrade/pg_dump tests in one place that
doesn't also have a bunch of other tests mixed in (and would also mean
that the regular regression tests could be 'clean').
I could also see us defining one set of commands to run which create
every type of object in the system that pg_dump understands and then
using that to perform the pg_dump and pg_upgrade tests. Those commands
would have to be annotated with minimum major version and maximum major
version, assuming we're going to use them cross-version, but that should
be reasonably straightforward to do.
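To illustrate the shape of what I mean, here is a minimal sketch; none
of these names exist today, and $oldversion and $node stand in for
whatever the test harness would provide:

    # Hypothetical version-annotated object list; min/max are inclusive
    # major-version bounds (undef meaning unbounded).
    my @upgrade_objects = (
        { min => 804, max => undef,
          sql => 'CREATE TABLE upgrade_tab (a int PRIMARY KEY, b text)' },
        { min => 901, max => undef,
          sql => 'CREATE EXTENSION pg_trgm' },
    );

    foreach my $obj (@upgrade_objects)
    {
        next if $oldversion < $obj->{min};
        next if defined $obj->{max} && $oldversion > $obj->{max};
        $node->safe_psql('postgres', $obj->{sql});
    }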
Another question is how much sense it makes to test this logic,
essentially, twice. The testing of pg_dump covers the pg_dump code,
which is what pg_upgrade uses anyway. The pg_upgrade tests really need
to cover the non-pg_dump-related parts, assuming we have appropriate
coverage in the pg_dump tests for the --binary-upgrade mode. Of course,
if we don't, then we should go about fixing that. There are certainly
some tests now, but perhaps we need more or need to have improvements
made there.
Thanks!
Stephen
On 2017-04-03 11:34:52 -0400, Stephen Frost wrote:
Peter,
* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:
On 4/3/17 09:07, Michael Paquier wrote:
I have had a WIP patch gathering dust for some time, so attached is a
more polished version. In more detail, here is what happens:
- test.sh is removed.
- vcregress.pl loses upgradecheck.
- The new test is added. In the case of MSVC this is now part of bincheck.
The patch has been tested on macOS and Windows.
This is a useful start. What I'd really like to see is that instead of
running the full serial tests to populate the pre-upgrade database, we
determine a useful subset of what that ends up generating and just
populate with that.
In the past, we've had the notion that the regression tests are intended
to also cover pg_upgrade/pg_dump by "leaving things around". What I
found in my efforts to provide better coverage in pg_dump is that there
was quite a bit of coverage missing using that approach.
Perhaps that could be fixed, but I tend to think it's a better approach
to have a complete set of pg_upgrade/pg_dump tests in one place that
doesn't also have a bunch of other tests mixed in (and would also mean
that the regular regression tests could be 'clean').
I could also see us defining one set of commands to run which create
every type of object in the system that pg_dump understands and then
using that to perform the pg_dump and pg_upgrade tests. Those commands
would have to be annotated with minimum major version and maximum major
version, assuming we're going to use them cross-version, but that should
be reasonably straightforward to do.
Another question is how much sense it makes to test this logic,
essentially, twice. The testing of pg_dump covers the pg_dump code,
which is what pg_upgrade uses anyway. The pg_upgrade tests really need
to cover the non-pg_dump-related parts, assuming we have appropriate
coverage in the pg_dump tests for the --binary-upgrade mode. Of course,
if we don't, then we should go about fixing that. There are certainly
some tests now, but perhaps we need more or need to have improvements
made there.
I don't fundamentally disagree with anything here, but I think it'd be a
serious mistake to link this to the conversion of the pg_upgrade tests
to tap tests.
Greetings,
Andres Freund
Andres,
* Andres Freund (andres@anarazel.de) wrote:
I don't fundamentally disagree with anything here, but I think it'd be a
serious mistake to link this to the conversion of the pg_upgrade tests
to TAP tests.
I agree that we should move forward with that conversion, regardless of
the rest of this discussion.
Thanks!
Stephen
On Tue, Apr 4, 2017 at 12:12 AM, Stephen Frost <sfrost@snowman.net> wrote:
I'm very curious what you're thinking here? IIRC, Andrew had some ideas
for how to do true cross-version testing with TAP in the buildfarm, but
I don't think we actually have that yet..?
I have heard nothing in this area. Cross-branch tests may be an
interesting challenge, as tests written in branch X may not be in branch Y.
The patch presented here does lower the coverage we have now.
--
Michael
Michael,
On Mon, Apr 3, 2017 at 18:29 Michael Paquier <michael.paquier@gmail.com>
wrote:
On Tue, Apr 4, 2017 at 12:12 AM, Stephen Frost <sfrost@snowman.net> wrote:
I'm very curious what you're thinking here? IIRC, Andrew had some ideas
for how to do true cross-version testing with TAP in the buildfarm, but
I don't think we actually have that yet..?
I have heard nothing in this area. Cross-branch tests may be an
interesting challenge, as tests written in branch X may not be in branch Y.
The patch presented here does lower the coverage we have now.
Not good if it lowers the coverage, but hopefully that's fixable. Have you
analyzed where we're reducing coverage..?
As for what I'm remembering, there's this:
/messages/by-id/5669acd9-efdc-2a0f-afea-10ba6003a050@dunslane.net
Of course, it's possible I misunderstood..
That seems focused on upgrading and I'd really like to see a general way to
do this with the TAP structure, specifically so we can test pg_dump and
psql against older versions. Having the ability to then be run under the
coverage testing would be fantastic and would help a great deal with the
coverage report.
Thanks!
Stephen
On Tue, Apr 4, 2017 at 7:38 AM, Stephen Frost <sfrost@snowman.net> wrote:
Not good if it lowers the coverage, but hopefully that's fixable. Have you
analyzed where we're reducing coverage..?
The current set of tests is just running pg_upgrade using the same
version for the source and target instances. Based on that, I am not
lowering what is covered by this set of tests; I am just doing some
cleanup.
As for what I'm remembering, there's this:
/messages/by-id/5669acd9-efdc-2a0f-afea-10ba6003a050@dunslane.net
Of course, it's possible I misunderstood..
This invokes pg_upgrade directly, so that's actually a third,
different way to test pg_upgrade on top of the two existing methods
used in vcregress.pl and pg_upgrade's test.sh.
That seems focused on upgrading and I'd really like to see a general way to
do this with the TAP structure, specifically so we can test pg_dump and psql
against older versions. Having the ability to then be run under the
coverage testing would be fantastic and would help a great deal with the
coverage report.
I don't disagree with that. What we need first is some logic to store
the installations of all the previous major versions in a temporary
directory. For example, use a subfolder in tmp_install tagged with the
major version number; then, when the TAP test starts, scan for all the
versions present in tmp_install and test the upgrade with a full grid,
as sketched below. One issue, though, is that we add $(bindir) to PATH
and there is currently no logic to change PATH automatically depending
on the major/minor version you are working on.
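A rough sketch of that scan, assuming a tmp_install/<major version>/bin
layout and a hypothetical test_upgrade() helper (the version ordering
here is naive, which is fine for a sketch):

    use File::Basename;

    # Collect the major versions present in tmp_install.
    my @versions = sort map { basename($_) } grep { -d $_ } glob('tmp_install/*');

    # Test the full upgrade grid, upgrading forward only.
    foreach my $old (@versions)
    {
        foreach my $new (@versions)
        {
            next if $old gt $new;
            test_upgrade("tmp_install/$old/bin", "tmp_install/$new/bin");
        }
    }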
So in short I don't think that this lack of infrastructure should be a
barrier for what is basically a cleanup but... I just work here.
--
Michael
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Tue, Apr 4, 2017 at 7:38 AM, Stephen Frost <sfrost@snowman.net> wrote:
Not good if it lowers the coverage, but hopefully that's fixable. Have you
analyzed where we're reducing coverage..?The current set of tests is just running pg_upgrade using the same
version for the source and target instances. Based on that I am not
lowering what is happening in this set of tests. Just doing some
cleanup.
Ok, I'm confused.
I wrote the above in response to your statement:
The patch presented here does lower the coverage we have now.
I assume (perhaps mistakenly) that this statement was based on an
analysis of before-and-after 'make coverage' runs. Here you are saying
that you're *not* lowering the coverage.
I understand how the current pg_upgrade tests work. I don't see
off-hand why the TAP tests would reduce the code coverage of pg_upgrade,
but if they do, we should be able to figure out why and correct it.
As for what I'm remembering, there's this:
/messages/by-id/5669acd9-efdc-2a0f-afea-10ba6003a050@dunslane.net
Of course, it's possible I misunderstood..
This invokes pg_upgrade directly, so that's actually a third,
different way to test pg_upgrade on top of the two existing methods
used in vcregress.pl and pg_upgrade's test.sh.
Ok, though I'm not sure that I see that as necessarily a bad thing.
There are only specific tools that we actually worry about being able to
work with older versions of PG, after all.
That seems focused on upgrading and I'd really like to see a general way to
do this with the TAP structure, specifically so we can test pg_dump and psql
against older versions. Having the ability to then be run under the
coverage testing would be fantastic and would help a great deal with the
coverage report.
I don't disagree with that. What we need first is some logic to store
the installations of all the previous major versions in a temporary
directory. For example, use a subfolder in tmp_install tagged with the
major version number; then, when the TAP test starts, scan for all the
versions present in tmp_install and test the upgrade with a full grid.
One issue, though, is that we add $(bindir) to PATH and there is
currently no logic to change PATH automatically depending on the
major/minor version you are working on.
Right, I figured that what Andrew did in the above post was something
along these lines, but I've not looked at it in any depth.
So in short I don't think that this lack of infrastructure should be a
barrier for what is basically a cleanup but... I just work here.
I didn't mean to imply that this patch needs to address the
cross-version testing challenge, was merely mentioning that there's been
some work in this area already by Andrew and that if you're interested
in working on that problem that you should probably coordinate with him.
What I do think is a barrier to this patch moving forward is if it
reduces our current code coverage testing (with the same-version
pg_upgrade that's run in the regular regression tests). If it doesn't,
then great, but if it does, then the patch should be updated to fix
that.
Thanks!
Stephen
On 4/3/17 11:32, Andres Freund wrote:
That doesn't strike me as particularly future-proof. We intentionally
leave objects behind in pg_regress runs, but that only works if we
actually run them...
I generally agree with the sentiments expressed later in this thread.
But just to clarify what I meant here: We don't need to run a, say,
1-minute serial test to load a few "left behind" objects for the
pg_upgrade test, if we can load the same set of objects using dedicated
scripting in, say, 2 seconds, as sketched below. This would make both
the pg_upgrade tests faster and would reduce the hidden dependencies in
the main tests about which kinds of objects need to be left behind.
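For instance, something as small as this in the TAP test could stand in
for the full pg_regress run; a minimal sketch, with a purely
illustrative object list ($oldnode being the test's source instance):

    $oldnode->safe_psql('postgres', q(
        CREATE TABLE upgrade_tab (a int PRIMARY KEY, b text);
        INSERT INTO upgrade_tab SELECT g, 'val' || g FROM generate_series(1, 10) g;
        CREATE INDEX upgrade_idx ON upgrade_tab (b);
        CREATE VIEW upgrade_view AS SELECT a, b FROM upgrade_tab;
        CREATE FUNCTION upgrade_func() RETURNS int LANGUAGE sql AS 'SELECT 1';
    ));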
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Apr 4, 2017 at 9:30 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Tue, Apr 4, 2017 at 7:38 AM, Stephen Frost <sfrost@snowman.net> wrote:
The patch presented here does lower the coverage we have now.
I assume (perhaps mistakenly) that this statement was based on an
analysis of before-and-after 'make coverage' runs. Here you are saying
that you're *not* lowering the coverage.
My apologies here: when I used the word "coverage" in my previous
emails, it implied that I had run the coverage tooling. But I did not,
because the TAP test proposed does exactly what test.sh is doing:
1) Initialize the old cluster and start it.
2) Create a bunch of databases with names covering the full range of
ASCII characters.
3) Run regression tests.
4) Take a dump of the old cluster.
5) Stop the old cluster.
6) Initialize the new cluster.
7) Run pg_upgrade.
8) Start the new cluster.
9) Take a dump from it.
10) Run the deletion script. (Oops, forgot this part!)
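Condensed, the comparison at the heart of it looks like this in the
attached test:

    run_log(['pg_dumpall', '--port', $oldnode->port, '-f', "$tempdir/dump1.sql"]);
    # ... run pg_upgrade and start the new node ...
    run_log(['pg_dumpall', '--port', $newnode->port, '-f', "$tempdir/dump2.sql"]);
    command_ok(['diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql"],
        'Old and new dump checks after pg_upgrade');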
I understand how the current pg_upgrade tests work. I don't see
off-hand why the TAP tests would reduce the code coverage of pg_upgrade,
but if they do, we should be able to figure out why and correct it.
Good news is that this patch at least does not lower the bar.
So in short I don't think that this lack of infrastructure should be a
barrier for what is basically a cleanup but... I just work here.
I didn't mean to imply that this patch needs to address the
cross-version testing challenge, was merely mentioning that there's been
some work in this area already by Andrew and that if you're interested
in working on that problem that you should probably coordinate with him.
Sure.
What I do think is a barrier to this patch moving forward is if it
reduces our current code coverage testing (with the same-version
pg_upgrade that's run in the regular regression tests). If it doesn't,
then great, but if it does, then the patch should be updated to fix
that.
I did not do a coverage test first, but surely this patch needs
numbers, so here you go.
Without the patch, here is the coverage of src/bin/pg_upgrade:
lines......: 57.7% (1311 of 2273 lines)
functions..: 85.3% (87 of 102 functions)
And with the patch:
lines......: 58.8% (1349 of 2294 lines)
functions..: 85.6% (89 of 104 functions)
The coverage gets a bit higher as a couple of basic code paths like
pg_upgrade --help get looked at.
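(For anyone wanting to reproduce these numbers, this is the standard
coverage recipe; the subdirectory targets keep the report to pg_upgrade:

    ./configure --enable-coverage
    make
    make -C src/bin/pg_upgrade check
    make -C src/bin/pg_upgrade coverage-html

The HTML report then ends up in a coverage/ subdirectory there.)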
--
Michael
Attachments:
pgupgrade-tap-test-v2.patch (application/octet-stream)
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index ecec0a60c7..b7651c5b33 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -452,7 +452,6 @@ $ENV{CONFIG}="Debug";
<userinput>vcregress isolationcheck</userinput>
<userinput>vcregress bincheck</userinput>
<userinput>vcregress recoverycheck</userinput>
-<userinput>vcregress upgradecheck</userinput>
</screen>
To change the schedule used (default is parallel), append it to the
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 8823288708..a78b39f3e5 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -35,9 +35,8 @@ clean distclean maintainer-clean:
pg_upgrade_dump_globals.sql \
pg_upgrade_dump_*.custom pg_upgrade_*.log
-check: test.sh all
- MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $< --install
+check:
+ $(prove_check)
-# disabled because it upsets the build farm
-#installcheck: test.sh
-# MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) $(SHELL) $<
+installcheck:
+ $(prove_installcheck)
diff --git a/src/bin/pg_upgrade/TESTING b/src/bin/pg_upgrade/TESTING
index 4ecfc5798e..4885164d04 100644
--- a/src/bin/pg_upgrade/TESTING
+++ b/src/bin/pg_upgrade/TESTING
@@ -61,7 +61,7 @@ steps:
7) Diff the regression database dump file with the regression dump
file loaded into the old server.
-The shell script test.sh in this directory performs more or less this
+The TAP test scripts in this directory perform more or less this
procedure. You can invoke it by running
make check
@@ -69,13 +69,3 @@ procedure. You can invoke it by running
or by running
make installcheck
-
-if "make install" (or "make install-world") were done beforehand.
-When invoked without arguments, it will run an upgrade from the
-version in this source tree to a new instance of the same version. To
-test an upgrade from a different version, invoke it like this:
-
- make installcheck oldbindir=...otherversion/bin oldsrc=...somewhere/postgresql
-
-In this case, you will have to manually eyeball the resulting dump
-diff for version-specific differences, as explained above.
diff --git a/src/bin/pg_upgrade/t/010_pg_upgrade.pl b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
new file mode 100644
index 0000000000..9a956c96b3
--- /dev/null
+++ b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
@@ -0,0 +1,96 @@
+# Set of tests for pg_upgrade.
+use strict;
+use warnings;
+use Cwd;
+use Config;
+use File::Basename;
+use IPC::Run;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 10;
+
+program_help_ok('pg_upgrade');
+program_version_ok('pg_upgrade');
+program_options_handling_ok('pg_upgrade');
+
+# Generate a database with a name made of a range of ASCII characters.
+sub generate_db
+{
+ my ($node, $from_char, $to_char) = @_;
+
+ my $dbname = '';
+ for my $i ($from_char .. $to_char)
+ {
+ next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
+ $dbname = $dbname . sprintf('%c', $i);
+ }
+ $node->run_log(['createdb', '--port', $node->port, $dbname]);
+}
+
+my $startdir = getcwd();
+
+# From now on, the test of pg_upgrade consists in setting up an instance
+# on which regression tests are run. This is the source instance used
+# for the upgrade. Then a new, fresh instance is created, and is used
+# as the target instance for the upgrade. Before running an upgrade a
+# logical dump of the old instance is taken, and a second logical dump
+# of the new instance is taken after the upgrade. The upgrade test
+# passes if there are no differences after running pg_upgrade.
+
+# Temporary location for dumps taken
+my $tempdir = TestLib::tempdir;
+
+# Initialize node to upgrade
+my $oldnode = get_new_node('old_node');
+$oldnode->init;
+$oldnode->start;
+
+# Creating databases with names covering most ASCII bytes
+generate_db($oldnode, 1, 45);
+generate_db($oldnode, 46, 90);
+generate_db($oldnode, 91, 127);
+
+# Run regression tests on the old instance
+chdir dirname($ENV{PG_REGRESS});
+$oldnode->run_log(['createdb', '--port', $oldnode->port, 'regression']);
+run_log([$ENV{PG_REGRESS}, '--schedule', './serial_schedule',
+ '--dlpath', '.', '--bindir=', '--use-existing',
+ '--port', $oldnode->port]);
+
+# Take a dump before performing the upgrade as a base comparison.
+run_log(['pg_dumpall', '--port', $oldnode->port, '-f', "$tempdir/dump1.sql"]);
+
+# Move back to current directory, all logs generated need to be located
+# at the origin.
+chdir $startdir;
+
+# Update the instance.
+$oldnode->stop;
+
+# pg_upgrade needs the location of the old and new binaries. This test
+# relies on binaries being in PATH, as does pg_config, so fetch the real
+# binary location from it.
+my ($bindir, $stderr);
+my $result = IPC::Run::run [ 'pg_config', '--bindir' ], '>', \$bindir, '2>', \$stderr;
+chomp($bindir);
+
+# Initialize a new node for the upgrade.
+my $newnode = get_new_node('new_node');
+$newnode->init;
+
+# Time for the real run.
+run_log(['pg_upgrade', '-d', $oldnode->data_dir, '-D', $newnode->data_dir,
+ '-b', $bindir, '-B', $bindir, '-p', $oldnode->port, '-P', $newnode->port]);
+$newnode->start;
+
+# Take a second dump on the upgraded instance.
+run_log(['pg_dumpall', '--port', $newnode->port, '-f', "$tempdir/dump2.sql"]);
+
+# Cleanup the old cluster data.
+my $delete_script = $TestLib::windows_os ? "$startdir/delete_old_cluster.bat" :
+ "$startdir/delete_old_cluster.sh";
+command_ok([$delete_script], "Deletion of old cluster data");
+
+# Compare the two dumps, there should be no differences.
+command_ok(['diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql"],
+ 'Old and new dump checks after pg_upgrade');
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
deleted file mode 100644
index cbc5259550..0000000000
--- a/src/bin/pg_upgrade/test.sh
+++ /dev/null
@@ -1,248 +0,0 @@
-#!/bin/sh
-
-# src/bin/pg_upgrade/test.sh
-#
-# Test driver for pg_upgrade. Initializes a new database cluster,
-# runs the regression tests (to put in some data), runs pg_dumpall,
-# runs pg_upgrade, runs pg_dumpall again, compares the dumps.
-#
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
-# Portions Copyright (c) 1994, Regents of the University of California
-
-set -e
-
-: ${MAKE=make}
-
-# Guard against parallel make issues (see comments in pg_regress.c)
-unset MAKEFLAGS
-unset MAKELEVEL
-
-# Run a given "initdb" binary and overlay the regression testing
-# authentication configuration.
-standard_initdb() {
- "$1" -N
- if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
- then
- cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
- fi
- ../../test/regress/pg_regress --config-auth "$PGDATA"
-}
-
-# Establish how the server will listen for connections
-testhost=`uname -s`
-
-case $testhost in
- MINGW*)
- LISTEN_ADDRESSES="localhost"
- PGHOST=localhost
- ;;
- *)
- LISTEN_ADDRESSES=""
- # Select a socket directory. The algorithm is from the "configure"
- # script; the outcome mimics pg_regress.c:make_temp_sockdir().
- PGHOST=$PG_REGRESS_SOCK_DIR
- if [ "x$PGHOST" = x ]; then
- {
- dir=`(umask 077 &&
- mktemp -d /tmp/pg_upgrade_check-XXXXXX) 2>/dev/null` &&
- [ -d "$dir" ]
- } ||
- {
- dir=/tmp/pg_upgrade_check-$$-$RANDOM
- (umask 077 && mkdir "$dir")
- } ||
- {
- echo "could not create socket temporary directory in \"/tmp\""
- exit 1
- }
-
- PGHOST=$dir
- trap 'rm -rf "$PGHOST"' 0
- trap 'exit 3' 1 2 13 15
- fi
- ;;
-esac
-
-POSTMASTER_OPTS="-F -c listen_addresses=$LISTEN_ADDRESSES -k \"$PGHOST\""
-export PGHOST
-
-# don't rely on $PWD here, as old shells don't set it
-temp_root=`pwd`/tmp_check
-
-if [ "$1" = '--install' ]; then
- temp_install=$temp_root/install
- bindir=$temp_install/$bindir
- libdir=$temp_install/$libdir
-
- "$MAKE" -s -C ../.. install DESTDIR="$temp_install"
-
- # platform-specific magic to find the shared libraries; see pg_regress.c
- LD_LIBRARY_PATH=$libdir:$LD_LIBRARY_PATH
- export LD_LIBRARY_PATH
- DYLD_LIBRARY_PATH=$libdir:$DYLD_LIBRARY_PATH
- export DYLD_LIBRARY_PATH
- LIBPATH=$libdir:$LIBPATH
- export LIBPATH
- SHLIB_PATH=$libdir:$SHLIB_PATH
- export SHLIB_PATH
- PATH=$libdir:$PATH
-
- # We need to make it use psql from our temporary installation,
- # because otherwise the installcheck run below would try to
- # use psql from the proper installation directory, which might
- # be outdated or missing. But don't override anything else that's
- # already in EXTRA_REGRESS_OPTS.
- EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --bindir='$bindir'"
- export EXTRA_REGRESS_OPTS
-fi
-
-: ${oldbindir=$bindir}
-
-: ${oldsrc=../../..}
-oldsrc=`cd "$oldsrc" && pwd`
-newsrc=`cd ../../.. && pwd`
-
-PATH=$bindir:$PATH
-export PATH
-
-BASE_PGDATA=$temp_root/data
-PGDATA="$BASE_PGDATA.old"
-export PGDATA
-rm -rf "$BASE_PGDATA" "$PGDATA"
-
-logdir=`pwd`/log
-rm -rf "$logdir"
-mkdir "$logdir"
-
-# Clear out any environment vars that might cause libpq to connect to
-# the wrong postmaster (cf pg_regress.c)
-#
-# Some shells, such as NetBSD's, return non-zero from unset if the variable
-# is already unset. Since we are operating under 'set -e', this causes the
-# script to fail. To guard against this, set them all to an empty string first.
-PGDATABASE=""; unset PGDATABASE
-PGUSER=""; unset PGUSER
-PGSERVICE=""; unset PGSERVICE
-PGSSLMODE=""; unset PGSSLMODE
-PGREQUIRESSL=""; unset PGREQUIRESSL
-PGCONNECT_TIMEOUT=""; unset PGCONNECT_TIMEOUT
-PGHOSTADDR=""; unset PGHOSTADDR
-
-# Select a non-conflicting port number, similarly to pg_regress.c
-PG_VERSION_NUM=`grep '#define PG_VERSION_NUM' "$newsrc"/src/include/pg_config.h | awk '{print $3}'`
-PGPORT=`expr $PG_VERSION_NUM % 16384 + 49152`
-export PGPORT
-
-i=0
-while psql -X postgres </dev/null 2>/dev/null
-do
- i=`expr $i + 1`
- if [ $i -eq 16 ]
- then
- echo port $PGPORT apparently in use
- exit 1
- fi
- PGPORT=`expr $PGPORT + 1`
- export PGPORT
-done
-
-# buildfarm may try to override port via EXTRA_REGRESS_OPTS ...
-EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --port=$PGPORT"
-export EXTRA_REGRESS_OPTS
-
-# enable echo so the user can see what is being executed
-set -x
-
-standard_initdb "$oldbindir"/initdb
-"$oldbindir"/pg_ctl start -l "$logdir/postmaster1.log" -o "$POSTMASTER_OPTS" -w
-
-# Create databases with names covering the ASCII bytes other than NUL, BEL,
-# LF, or CR. BEL would ring the terminal bell in the course of this test, and
-# it is not otherwise a special case. PostgreSQL doesn't support the rest.
-dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)
- if (i != 7 && i != 10 && i != 13) printf "%c", i }' </dev/null`
-# Exercise backslashes adjacent to double quotes, a Windows special case.
-dbname1='\"\'$dbname1'\\"\\\'
-dbname2=`awk 'BEGIN { for (i = 46; i < 91; i++) printf "%c", i }' </dev/null`
-dbname3=`awk 'BEGIN { for (i = 91; i < 128; i++) printf "%c", i }' </dev/null`
-createdb "$dbname1" || createdb_status=$?
-createdb "$dbname2" || createdb_status=$?
-createdb "$dbname3" || createdb_status=$?
-
-if "$MAKE" -C "$oldsrc" installcheck; then
- pg_dumpall -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
- if [ "$newsrc" != "$oldsrc" ]; then
- oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%'; DROP FUNCTION public.myfunc(integer);"
- ;;
- 900??)
- fix_sql="SET bytea_output TO escape; UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%';"
- ;;
- 901??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE '$oldsrc%';"
- ;;
- esac
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- mv "$temp_root"/dump1.sql "$temp_root"/dump1.sql.orig
- sed "s;$oldsrc;$newsrc;g" "$temp_root"/dump1.sql.orig >"$temp_root"/dump1.sql
- fi
-else
- make_installcheck_status=$?
-fi
-"$oldbindir"/pg_ctl -m fast stop
-if [ -n "$createdb_status" ]; then
- exit 1
-fi
-if [ -n "$make_installcheck_status" ]; then
- exit 1
-fi
-if [ -n "$psql_fix_sql_status" ]; then
- exit 1
-fi
-if [ -n "$pg_dumpall1_status" ]; then
- echo "pg_dumpall of pre-upgrade database cluster failed"
- exit 1
-fi
-
-PGDATA=$BASE_PGDATA
-
-standard_initdb 'initdb'
-
-pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "${PGDATA}" -b "$oldbindir" -B "$bindir" -p "$PGPORT" -P "$PGPORT"
-
-pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-
-case $testhost in
- MINGW*) cmd /c analyze_new_cluster.bat ;;
- *) sh ./analyze_new_cluster.sh ;;
-esac
-
-pg_dumpall -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
-pg_ctl -m fast stop
-
-# no need to echo commands anymore
-set +x
-echo
-
-if [ -n "$pg_dumpall2_status" ]; then
- echo "pg_dumpall of post-upgrade database cluster failed"
- exit 1
-fi
-
-case $testhost in
- MINGW*) cmd /c delete_old_cluster.bat ;;
- *) sh ./delete_old_cluster.sh ;;
-esac
-
-if diff "$temp_root"/dump1.sql "$temp_root"/dump2.sql >/dev/null; then
- echo PASSED
- exit 0
-else
- echo "Files $temp_root/dump1.sql and $temp_root/dump2.sql differ"
- echo "dumps were not identical"
- exit 1
-fi
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 8933920d9b..7d6fb6200a 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -34,7 +34,7 @@ if (-e "src/tools/msvc/buildenv.pl")
my $what = shift || "";
if ($what =~
-/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|upgradecheck|bincheck|recoverycheck)$/i
+/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|bincheck|recoverycheck)$/i
)
{
$what = uc $what;
@@ -89,8 +89,7 @@ my %command = (
MODULESCHECK => \&modulescheck,
ISOLATIONCHECK => \&isolationcheck,
BINCHECK => \&bincheck,
- RECOVERYCHECK => \&recoverycheck,
- UPGRADECHECK => \&upgradecheck,);
+ RECOVERYCHECK => \&recoverycheck,);
my $proc = $command{$what};
@@ -382,126 +381,6 @@ sub standard_initdb
$ENV{PGDATA}) == 0);
}
-# This is similar to appendShellString(). Perl system(@args) bypasses
-# cmd.exe, so omit the caret escape layer.
-sub quote_system_arg
-{
- my $arg = shift;
-
- # Change N >= 0 backslashes before a double quote to 2N+1 backslashes.
- $arg =~ s/(\\*)"/${\($1 . $1)}\\"/gs;
-
- # Change N >= 1 backslashes at end of argument to 2N backslashes.
- $arg =~ s/(\\+)$/${\($1 . $1)}/gs;
-
- # Wrap the whole thing in unescaped double quotes.
- return "\"$arg\"";
-}
-
-# Generate a database with a name made of a range of ASCII characters, useful
-# for testing pg_upgrade.
-sub generate_db
-{
- my ($prefix, $from_char, $to_char, $suffix) = @_;
-
- my $dbname = $prefix;
- for my $i ($from_char .. $to_char)
- {
- next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
- $dbname = $dbname . sprintf('%c', $i);
- }
- $dbname .= $suffix;
-
- system('createdb', quote_system_arg($dbname));
- my $status = $? >> 8;
- exit $status if $status;
-}
-
-sub upgradecheck
-{
- my $status;
- my $cwd = getcwd();
-
- # Much of this comes from the pg_upgrade test.sh script,
- # but it only covers the --install case, and not the case
- # where the old and new source or bin dirs are different.
- # i.e. only this version to this version check. That's
- # what pg_upgrade's "make check" does.
-
- $ENV{PGHOST} = 'localhost';
- $ENV{PGPORT} ||= 50432;
- my $tmp_root = "$topdir/src/bin/pg_upgrade/tmp_check";
- (mkdir $tmp_root || die $!) unless -d $tmp_root;
- my $upg_tmp_install = "$tmp_root/install"; # unshared temp install
- print "Setting up temp install\n\n";
- Install($upg_tmp_install, "all", $config);
-
- # Install does a chdir, so change back after that
- chdir $cwd;
- my ($bindir, $libdir, $oldsrc, $newsrc) =
- ("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
- $ENV{PATH} = "$bindir;$ENV{PATH}";
- my $data = "$tmp_root/data";
- $ENV{PGDATA} = "$data.old";
- my $logdir = "$topdir/src/bin/pg_upgrade/log";
- (mkdir $logdir || die $!) unless -d $logdir;
- print "\nRunning initdb on old cluster\n\n";
- standard_initdb() or exit 1;
- print "\nStarting old cluster\n\n";
- my @args = ('pg_ctl', 'start', '-l', "$logdir/postmaster1.log");
- system(@args) == 0 or exit 1;
-
- print "\nCreating databases with names covering most ASCII bytes\n\n";
- generate_db("\\\"\\", 1, 45, "\\\\\"\\\\\\");
- generate_db('', 46, 90, '');
- generate_db('', 91, 127, '');
-
- print "\nSetting up data for upgrading\n\n";
- installcheck();
-
- # now we can chdir into the source dir
- chdir "$topdir/src/bin/pg_upgrade";
- print "\nDumping old cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump1.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping old cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- $ENV{PGDATA} = "$data";
- print "\nSetting up new cluster\n\n";
- standard_initdb() or exit 1;
- print "\nRunning pg_upgrade\n\n";
- @args = (
- 'pg_upgrade', '-d', "$data.old", '-D', $data, '-b',
- $bindir, '-B', $bindir);
- system(@args) == 0 or exit 1;
- print "\nStarting new cluster\n\n";
- @args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
- system(@args) == 0 or exit 1;
- print "\nSetting up stats on new cluster\n\n";
- system(".\\analyze_new_cluster.bat") == 0 or exit 1;
- print "\nDumping new cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump2.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping new cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- print "\nDeleting old cluster\n\n";
- system(".\\delete_old_cluster.bat") == 0 or exit 1;
- print "\nComparing old and new cluster dumps\n\n";
-
- @args = ('diff', '-q', "$tmp_root/dump1.sql", "$tmp_root/dump2.sql");
- system(@args);
- $status = $?;
- if (!$status)
- {
- print "PASSED\n";
- }
- else
- {
- print "dumps not identical!\n";
- exit(1);
- }
-}
-
sub fetchRegressOpts
{
my $handle;
@@ -607,7 +486,6 @@ sub usage
" modulescheck run tests of modules in src/test/modules/\n",
" plcheck run tests of PL languages\n",
" recoverycheck run recovery test suite\n",
- " upgradecheck run tests of pg_upgrade\n",
"\nOptions for <schedule>:\n",
" serial serial mode\n",
" parallel parallel mode\n";
On Tue, Apr 4, 2017 at 10:52 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
On 4/3/17 11:32, Andres Freund wrote:
That doesn't strike me as particularly future-proof. We intentionally
leave objects behind in pg_regress runs, but that only works if we
actually run them...
I generally agree with the sentiments expressed later in this thread.
But just to clarify what I meant here: We don't need to run a, say,
1-minute serial test to load a few "left behind" objects for the
pg_upgrade test, if we can load the same set of objects using dedicated
scripting in, say, 2 seconds. This would make both the pg_upgrade tests
faster and would reduce the hidden dependencies in the main tests about
which kinds of objects need to be left behind.
Making the tests run shorter while maintaining the current code
coverage is nice. But this complicates test suite maintenance, as it
needs either a dedicated regression schedule or an extra test suite
where objects are created just for the sake of pg_upgrade. This
increases the risk of the test suite rotting over time if patch
authors and reviewers are not careful.
--
Michael
Michael,
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Tue, Apr 4, 2017 at 9:30 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Tue, Apr 4, 2017 at 7:38 AM, Stephen Frost <sfrost@snowman.net> wrote:
The patch presented here does lower the coverage we have now.

I assume (perhaps mistakenly) that this statement was based on an
analysis of before-and-after 'make coverage' runs. Here you are saying
that you're *not* lowering the coverage.

My apologies here: when I used the word "coverage" in my previous
emails, it apparently implied that I had run the coverage tooling. But
I did not, because the proposed TAP test does exactly what test.sh is
doing:
Ah, ok, no worries. Glad to hear that there isn't any difference in
coverage or in what's being done.
1) Initialize the old cluster and start it.
2) Create a bunch of databases covering the full range of ASCII characters.
3) Run regression tests.
4) Take a dump of the old cluster.
5) Stop the old cluster.
6) Initialize the new cluster.
7) Run pg_upgrade.
8) Start the new cluster.
9) Take a dump from it.
10) Run the deletion script. (Oops, forgot this part!)
Presumably the check to match the old dump against the new one is also
performed?
I understand how the current pg_upgrade tests work. I don't see
off-hand why the TAP tests would reduce the code coverage of pg_upgrade,
but if they do, we should be able to figure out why and correct it.

Good news is that this patch at least does not lower the bar.
Great, then I don't see any reason we can't move forward with it.
What I do think is a barrier to this patch moving forward is if it
reduces our current code coverage testing (with the same-version
pg_upgrade that's run in the regular regression tests). If it doesn't,
then great, but if it does, then the patch should be updated to fix
that.

I did not do a coverage test first, but surely this patch needs
numbers, so here you go.

Without the patch, here is the coverage of src/bin/pg_upgrade:
lines......: 57.7% (1311 of 2273 lines)
functions..: 85.3% (87 of 102 functions)

And with the patch:
lines......: 58.8% (1349 of 2294 lines)
functions..: 85.6% (89 of 104 functions)
The coverage gets a bit higher as a couple of basic code paths like
pg_upgrade --help get looked at.
Fantastic, that's even better.
Thanks!
Stephen
Michael,
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Tue, Apr 4, 2017 at 10:52 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

On 4/3/17 11:32, Andres Freund wrote:
That doesn't strike as particularly future proof. We intentionally
leave objects behind pg_regress runs, but that only works if we actually
run them...

I generally agree with the sentiments expressed later in this thread.
But just to clarify what I meant here: We don't need to run a, say,
1-minute serial test to load a few "left behind" objects for the
pg_upgrade test, if we can load the same set of objects using dedicated
scripting in say 2 seconds. This would make both the pg_upgrade tests
faster and would reduce the hidden dependencies in the main tests about
which kinds of objects need to be left behind.

Making the tests run shorter while maintaining the current code
coverage is nice. But this complicates test suite maintenance, as it
needs either a dedicated regression schedule or an extra test suite
where objects are created just for the sake of pg_upgrade. That
increases the risk of the test suite rotting over time if patch authors
and reviewers are not careful.
I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are able to
then exercise those code-paths in pg_dump, without needing to run the
entire serial regression test run.
I'm still not completely convinced that we actually need to
independently test pg_upgrade by creating all the objects which the
pg_dump TAP tests do, given that pg_upgrade just runs pg_dump
underneath. If we really want to do that, however, what we should do
is abstract out the pg_dump set of tests into a place from which both
the pg_dump and pg_upgrade TAP tests could use them to create all the
supported types of objects for their tests.
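A sketch of the kind of abstraction meant here; the module name,
location, and function are hypothetical, not an existing API:

# In some shared location, say src/test/perl/PgDumpTestObjects.pm:
package PgDumpTestObjects;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(create_test_objects);

# Create every type of object the pg_dump tests know about on the
# given PostgresNode, so both the pg_dump and pg_upgrade suites can
# load the same set.
sub create_test_objects
{
	my ($node) = @_;
	$node->safe_psql('postgres', 'CREATE TABLE dump_test (a int);');
	# ... one statement per object type under test ...
}

1;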
Thanks!
Stephen
Stephen Frost <sfrost@snowman.net> writes:
I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are able to
then exercise those code-paths in pg_dump, without needing to run the
entire serial regression test run.
Right. But there's a certain amount of serendipity involved in using the
core regression tests' final results. For example, I don't know how long
it would've taken us to understand the problems around dumping and
reloading child tables with inconsistent column orders, had there not been
examples of that in the regression tests. I worry that creating a sterile
set of objects for testing pg_dump will leave blind spots, because it will
mean that we only test cases that we explicitly created test cases for.
I'm still not completely convinced that we actually need to
independently test pg_upgrade by creating all the objects which the
pg_dump TAP tests do, given that pg_upgrade just runs pg_dump
underneath. If we really want to do that, however, what we should do
is abstract out the pg_dump set of tests into a place from which both
the pg_dump and pg_upgrade TAP tests could use them to create all the
supported types of objects for their tests.
I think it's largely pointless to test pg_dump --binary-upgrade except
as a part of pg_upgrade.
regards, tom lane
Tom,
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are able to
then exercise those code-paths in pg_dump, without needing to run the
entire serial regression test run.

Right. But there's a certain amount of serendipity involved in using the
core regression tests' final results. For example, I don't know how long
it would've taken us to understand the problems around dumping and
reloading child tables with inconsistent column orders, had there not been
examples of that in the regression tests. I worry that creating a sterile
set of objects for testing pg_dump will leave blind spots, because it will
mean that we only test cases that we explicitly created test cases for.
We don't need to only create sterile sets of objects in the pg_dump TAP
tests. I don't believe we need to populate GIN indexes or vacuum them
to test pg_dump/pg_upgrade either (at least, not if we're going to stick
to the pg_upgrade test basically being if pg_dump returns the same
results before-and-after).
I'm all for adding tests into pg_dump which do things like drop columns
and change column names and other cases which could impact if the
pg_dump is correct or not, and there's nothing preventing those tests
from being added in the existing structure. Certainly, before we remove
the coverage provided by running the serial test suite and then using
pg_upgrade, we should analyze what is being tested and ensure that we're
providing that same set of testing in the pg_dump TAP tests.
I'm still not completely convinced that we actually need to
independently test pg_upgrade by creating all the objects which the
pg_dump TAP tests do, given that pg_upgrade just runs pg_dump
underneath. If we really want to do that, however, what we should do
is abstract out the pg_dump set of tests into a place from which both
the pg_dump and pg_upgrade TAP tests could use them to create all the
supported types of objects for their tests.

I think it's largely pointless to test pg_dump --binary-upgrade except
as a part of pg_upgrade.
That's how I discovered that comments and security labels weren't being
pulled through to the new cluster for blobs, so I would have to disagree
with this. Frankly, it's also much more straightforward to run
pg_dump --binary-upgrade than it is to get pg_upgrade to do the same.
Still, I'm not actually against centralizing the tests done with pg_dump
such that they could be used by pg_upgrade also. Creating all those
objects takes less than a second, at least on my system, so it would
still be quite a bit faster than running the serial regression suite.
We might also consider if there's a way to change the format for those
tests to make them a bit less impenetrable for non-Perl folks to work
with and to make it simpler to add new tests as new features are added.
Thanks!
Stephen
Hi,
On 2017-04-05 10:40:41 -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are able to
then exercise those code-paths in pg_dump, without needing to run the
entire serial regression test run.

Right. But there's a certain amount of serendipity involved in using the
core regression tests' final results. For example, I don't know how long
it would've taken us to understand the problems around dumping and
reloading child tables with inconsistent column orders, had there not been
examples of that in the regression tests. I worry that creating a sterile
set of objects for testing pg_dump will leave blind spots, because it will
mean that we only test cases that we explicitly created test cases for.

We don't need to only create sterile sets of objects in the pg_dump TAP
tests.
I really, really don't understand why we're conflating making pg_upgrade
tests less fragile / duplicative with changing what we use to test it.
This seems to have the sole result that we're not going to get anywhere.
I don't believe we need to populate GIN indexes or vacuum them
to test pg_dump/pg_upgrade either (at least, not if we're going to stick
to the pg_upgrade test basically being if pg_dump returns the same
results before-and-after).
I think we *should* have populated GIN indexes. Yes, the coverage isn't
perfect, but the VACUUM definitely gives a decent amount of coverage of
whether the gin index looks halfway sane after the upgrade.
Greetings,
Andres Freund
Andres,
* Andres Freund (andres@anarazel.de) wrote:
On 2017-04-05 10:40:41 -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are able to
then exercise those code-paths in pg_dump, without needing to run the
entire serial regression test run.

Right. But there's a certain amount of serendipity involved in using the
core regression tests' final results. For example, I don't know how long
it would've taken us to understand the problems around dumping and
reloading child tables with inconsistent column orders, had there not been
examples of that in the regression tests. I worry that creating a sterile
set of objects for testing pg_dump will leave blind spots, because it will
mean that we only test cases that we explicitly created test cases for.We don't need to only create sterile sets of objects in the pg_dump TAP
tests.

I really, really don't understand why we're conflating making pg_upgrade
tests less fragile / duplicative with changing what we use to test it.
This seems to have the sole result that we're not going to get anywhere.
Probably because the point was brought up that the regression tests
for pg_upgrade spend a bunch of time doing something which, ultimately,
doesn't actually add any real value. Yes, there are bits of the core
regression tests that currently add value over what we have through
other approaches, but that's not where the bulk of the time running
those tests goes.
I don't believe we need to populate GIN indexes or vacuum them
to test pg_dump/pg_upgrade either (at least, not if we're going to stick
to the pg_upgrade test basically being if pg_dump returns the same
results before-and-after).

I think we *should* have populated GIN indexes. Yes, the coverage isn't
perfect, but the VACUUM definitely gives a decent amount of coverage of
whether the gin index looks halfway sane after the upgrade.
We don't look at the gin index after the upgrade in the current
pg_upgrade testing, so I don't see why you feel it's at all valuable.
If we *did* do that (and I'm all for adding such tests), then perhaps
this argument would make sense, but we don't today and I haven't seen
anyone propose changing that.
Thanks!
Stephen
On 2017-04-05 10:50:19 -0400, Stephen Frost wrote:
Andres,
* Andres Freund (andres@anarazel.de) wrote:
On 2017-04-05 10:40:41 -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
I believe that what Peter was getting at is that the pg_dump TAP tests
create a whole slew of objects in just a few seconds and are able to
then exercise those code-paths in pg_dump, without needing to run the
entire serial regression test run.

Right. But there's a certain amount of serendipity involved in using the
core regression tests' final results. For example, I don't know how long
it would've taken us to understand the problems around dumping and
reloading child tables with inconsistent column orders, had there not been
examples of that in the regression tests. I worry that creating a sterile
set of objects for testing pg_dump will leave blind spots, because it will
mean that we only test cases that we explicitly created test cases for.We don't need to only create sterile sets of objects in the pg_dump TAP
tests.

I really, really don't understand why we're conflating making pg_upgrade
tests less fragile / duplicative with changing what we use to test it.
This seems to have the sole result that we're not going to get anywhere.

Probably because the point was brought up that the regression tests
for pg_upgrade spend a bunch of time doing something which, ultimately,
doesn't actually add any real value. Yes, there are bits of the core
regression tests that currently add value over what we have through
other approaches, but that's not where the bulk of the time running
those tests goes.
Create a separate patch [& thread] about that, don't conflate the
topics. I'm very much in favor of this rewrite, I'm very much not in
favor of only using some targeted testsuite. By combining two
independent changes, you're just making it less likely that anything
happens.
I don't believe we need to populate GIN indexes or vacuum them
to test pg_dump/pg_upgrade either (at least, not if we're going to stick
to the pg_upgrade test basically being if pg_dump returns the same
results before-and-after).

I think we *should* have populated GIN indexes. Yes, the coverage isn't
perfect, but the VACUUM definitely gives a decent amount of coverage of
whether the gin index looks halfway sane after the upgrade.

We don't look at the gin index after the upgrade in the current
pg_upgrade testing, so I don't see why you feel it's at all valuable.
It'd be trivial to add a VACUUM at the point where analyze_new_cluster
is currently run. And I've previously run more manual tests. Is that
perfect - no, definitely not.
Greetings,
Andres Freund
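A minimal sketch of the kind of VACUUM step being discussed, assuming
the $newnode PostgresNode object from the proposed 010_pg_upgrade.pl is
in scope; the GIN table name and query are hypothetical, not part of
the posted patch:

# Exercise the upgraded relations by scanning them all.
$newnode->safe_psql('regression', 'VACUUM;');

# Optionally check that a GIN index still answers queries after the
# upgrade; gin_test_tbl is a hypothetical regression table name.
$newnode->safe_psql('regression',
	'SET enable_seqscan = off; SELECT count(*) FROM gin_test_tbl WHERE i @> ARRAY[1];');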
Stephen Frost <sfrost@snowman.net> writes:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Right. But there's a certain amount of serendipity involved in using the
core regression tests' final results. For example, I don't know how long
it would've taken us to understand the problems around dumping and
reloading child tables with inconsistent column orders, had there not been
examples of that in the regression tests. I worry that creating a sterile
set of objects for testing pg_dump will leave blind spots, because it will
mean that we only test cases that we explicitly created test cases for.
I'm all for adding tests into pg_dump which do things like drop columns
and change column names and other cases which could impact if the
pg_dump is correct or not, and there's nothing preventing those tests
from being added in the existing structure. Certainly, before we remove
the coverage provided by running the serial test suite and then using
pg_upgrade, we should analyze what is being tested and ensure that we're
providing that same set of testing in the pg_dump TAP tests.
I don't think you grasped my basic point, which is that I'm worried about
emergent cases that we don't foresee needing to test (and that no amount
of code coverage checking would have shown up as being overlooked).
Admittedly, relying on the core regression tests to trigger such cases is
a pretty haphazard strategy, but it's way better than no strategy at all.
We might also consider if there's a way to change the format for those
tests to make them a bit less impenetrable for non-Perl folks to work
with and to make it simpler to add new tests as new features are added.
TBH, that's part of my allergy to this concept, ie that this test
mechanism seems pretty write-only. I do not think that people will add
pg_dump test cases except when required to by project policy, so that
we will end up with a very skeletal set of tests that won't find any
unforeseen behaviors.
The TAP tests in general are utterly developer-unfriendly from where
I sit: not only is the code pretty unreadable, but god help you when
you need to try to debug a failure. I think that some serious effort
needs to be spent on improving that situation before we imagine that
we can throw away other test mechanisms we have today in favor of
TAP tests.
regards, tom lane
Andres,
* Andres Freund (andres@anarazel.de) wrote:
On 2017-04-05 10:50:19 -0400, Stephen Frost wrote:
Probably because the point was brought up that the regression tests
for pg_upgrade spend a bunch of time doing something which, ultimately,
doesn't actually add any real value. Yes, there are bits of the core
regression tests that currently add value over what we have through
other approaches, but that's not where the bulk of the time running
those tests goes.

Create a separate patch [& thread] about that, don't conflate the
topics. I'm very much in favor of this rewrite, I'm very much not in
favor of only using some targeted testsuite. By combining two
independent changes, you're just making it less likely that anything
happens.
I've made it clear, I thought, a couple of times that I agree with the
rewrite and that we should move forward with it. Nothing on this
sub-thread changes that. It's also registered in the 2017-07
commitfest, so I wouldn't think that there's a risk of it being
forgotten or that we need to cut off all discussion about what may
change between now and July that would be relevant to this patch.
We don't look at the gin index after the upgrade in the current
pg_upgrade testing, so I don't see why you feel it's at all valuable.

It'd be trivial to add a VACUUM at the point where analyze_new_cluster
is currently run. And I've previously run more manual tests. Is that
perfect - no, definitely not.
Being trivial doesn't mean it's something we're actually doing today.
Given that we aren't actually changing anything in the index during a
same-version pg_upgrade, nor are we changing the code that's run by
that VACUUM, I'm curious just what we're ending up testing that's
different from just restarting the existing cluster and running a new
VACUUM.
Thanks!
Stephen
Tom,
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
I'm all for adding tests into pg_dump which do things like drop columns
and change column names and other cases which could impact if the
pg_dump is correct or not, and there's nothing preventing those tests
from being added in the existing structure. Certainly, before we remove
the coverage provided by running the serial test suite and then using
pg_upgrade, we should analyze what is being tested and ensure that we're
providing that same set of testing in the pg_dump TAP tests.

I don't think you grasped my basic point, which is that I'm worried about
emergent cases that we don't foresee needing to test (and that no amount
of code coverage checking would have shown up as being overlooked).
Admittedly, relying on the core regression tests to trigger such cases is
a pretty haphazard strategy, but it's way better than no strategy at all.
The tests that were added to the core regression suite were added for
a reason, and hopefully we can identify cases where it'd make sense to
also run those tests for pg_upgrade/pg_dump testing. More-or-less
anything that materially changes the catalog should be included, I
would think. Things that really only exercise the heap/index files
don't need to be performed, because the pg_upgrade process doesn't
change those.
We might also consider if there's a way to change the format for those
tests to make them a bit less impenetrable for non-Perl folks to work
with and to make it simpler to add new tests as new features are added.

TBH, that's part of my allergy to this concept, ie that this test
mechanism seems pretty write-only. I do not think that people will add
pg_dump test cases except when required to by project policy, so that
we will end up with a very skeletal set of tests that won't find any
unforeseen behaviors.
I certainly agree that the current structure for the tests isn't trivial
to work with and would welcome suggestions as to how to improve it. Now
that we've had this testing structure for a year and have added quite a
bit more to it, it's definitely clear that we need to find a more
developer-friendly approach.
The TAP tests in general are utterly developer-unfriendly from where
I sit: not only is the code pretty unreadable, but god help you when
you need to try to debug a failure. I think that some serious effort
needs to be spent on improving that situation before we imagine that
we can throw away other test mechanisms we have today in favor of
TAP tests.
Agreed.
Thanks!
Stephen
On Wed, Apr 5, 2017 at 10:24 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Michael Paquier (michael.paquier@gmail.com) wrote:
1) Initialize the old cluster and start it.
2) Create a bunch of databases covering the full range of ASCII characters.
3) Run regression tests.
4) Take a dump of the old cluster.
5) Stop the old cluster.
6) Initialize the new cluster.
7) Run pg_upgrade.
8) Start the new cluster.
9) Take a dump from it.
10) Run the deletion script. (Oops, forgot this part!)

Presumably the check to match the old dump against the new one is also
performed?
Yes. That's run with command_ok() at the end.
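For readers following along, that final check boils down to this, as in
the attached patch:

command_ok(['diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql"],
	'Old and new dump checks after pg_upgrade');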
--
Michael
On Wed, Apr 05, 2017 at 11:13:33AM -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
I'm all for adding tests into pg_dump which do things like drop columns
and change column names and other cases which could impact if the
pg_dump is correct or not, and there's nothing preventing those tests
from being added in the existing structure. Certainly, before we remove
the coverage provided by running the serial test suite and then using
pg_upgrade, we should analyze what is being tested and ensure that we're
providing that same set of testing in the pg_dump TAP tests.

I don't think you grasped my basic point, which is that I'm worried about
emergent cases that we don't foresee needing to test (and that no amount
of code coverage checking would have shown up as being overlooked).
Admittedly, relying on the core regression tests to trigger such cases is
a pretty haphazard strategy, but it's way better than no strategy at all.

The tests that were added to the core regression suite were added for
a reason, and hopefully we can identify cases where it'd make sense to
also run those tests for pg_upgrade/pg_dump testing.
I think you _are_ missing Tom's point. We've caught pg_dump and pg_upgrade
bugs thanks to regression database objects created for purposes unrelated to
pg_dump. It's true that there exist other test strategies that are more
efficient or catch more bugs overall. None of them substitute 100% for the
serendipity seen in testing dump/restore on the regression database.
More-or-less
anything that materially changes the catalog should be included, I
would think. Things that really only exercise the heap/index files
don't need to be performed, because the pg_upgrade process doesn't
change those.
That is formally true.
Also, I agree with Andres that this is not a thread for discussing test
changes beyond mechanical translation of the pg_upgrade test suite.
On Thu, Apr 6, 2017 at 7:48 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:
On Wed, Apr 5, 2017 at 10:24 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Michael Paquier (michael.paquier@gmail.com) wrote:
1) Initialize the old cluster and start it.
2) Create a bunch of databases covering the full range of ASCII characters.
3) Run regression tests.
4) Take a dump of the old cluster.
5) Stop the old cluster.
6) Initialize the new cluster.
7) Run pg_upgrade.
8) Start the new cluster.
9) Take a dump from it.
10) Run the deletion script. (Oops, forgot this part!)

Presumably the check to match the old dump against the new one is also
performed?

Yes. That's run with command_ok() at the end.
Attached is an updated patch to use --no-sync with pg_dumpall calls.
--
Michael
Attachments:
pgupgrade-tap-test-v3.patch (application/octet-stream)
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index f5dfb91ac1..a4678462c3 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -444,7 +444,6 @@ $ENV{CONFIG}="Debug";
<userinput>vcregress isolationcheck</userinput>
<userinput>vcregress bincheck</userinput>
<userinput>vcregress recoverycheck</userinput>
-<userinput>vcregress upgradecheck</userinput>
</screen>
To change the schedule used (default is parallel), append it to the
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 8823288708..a78b39f3e5 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -35,9 +35,8 @@ clean distclean maintainer-clean:
pg_upgrade_dump_globals.sql \
pg_upgrade_dump_*.custom pg_upgrade_*.log
-check: test.sh all
- MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $< --install
+check:
+ $(prove_check)
-# disabled because it upsets the build farm
-#installcheck: test.sh
-# MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) $(SHELL) $<
+installcheck:
+ $(prove_installcheck)
diff --git a/src/bin/pg_upgrade/TESTING b/src/bin/pg_upgrade/TESTING
index 4ecfc5798e..4885164d04 100644
--- a/src/bin/pg_upgrade/TESTING
+++ b/src/bin/pg_upgrade/TESTING
@@ -61,7 +61,7 @@ steps:
7) Diff the regression database dump file with the regression dump
file loaded into the old server.
-The shell script test.sh in this directory performs more or less this
+The TAP test scripts in this directory perform more or less this
procedure. You can invoke it by running
make check
@@ -69,13 +69,3 @@ procedure. You can invoke it by running
or by running
make installcheck
-
-if "make install" (or "make install-world") were done beforehand.
-When invoked without arguments, it will run an upgrade from the
-version in this source tree to a new instance of the same version. To
-test an upgrade from a different version, invoke it like this:
-
- make installcheck oldbindir=...otherversion/bin oldsrc=...somewhere/postgresql
-
-In this case, you will have to manually eyeball the resulting dump
-diff for version-specific differences, as explained above.
diff --git a/src/bin/pg_upgrade/t/010_pg_upgrade.pl b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
new file mode 100644
index 0000000000..a58ee267bc
--- /dev/null
+++ b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
@@ -0,0 +1,93 @@
+# Set of tests for pg_upgrade.
+use strict;
+use warnings;
+use Cwd;
+use Config;
+use File::Basename;
+use IPC::Run;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 9;
+
+program_help_ok('pg_upgrade');
+program_version_ok('pg_upgrade');
+program_options_handling_ok('pg_upgrade');
+
+# Generate a database with a name made of a range of ASCII characters.
+sub generate_db
+{
+ my ($node, $from_char, $to_char) = @_;
+
+ my $dbname = '';
+ for my $i ($from_char .. $to_char)
+ {
+ next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
+ $dbname = $dbname . sprintf('%c', $i);
+ }
+ $node->run_log([ 'createdb', '--port', $node->port, $dbname ]);
+}
+
+my $startdir = getcwd();
+
+# From now on, the test of pg_upgrade consists in setting up an instance
+# on which regression tests are run. This is the source instance used
+# for the upgrade. Then a new, fresh instance is created, and is used
+# as the target instance for the upgrade. Before running an upgrade a
+# logical dump of the old instance is taken, and a second logical dump
+# of the new instance is taken after the upgrade. The upgrade test
+# passes if there are no differences after running pg_upgrade.
+
+# Temporary location for dumps taken
+my $tempdir = TestLib::tempdir;
+
+# Initialize node to upgrade
+my $oldnode = get_new_node('old_node');
+$oldnode->init;
+$oldnode->start;
+
+# Creating databases with names covering most ASCII bytes
+generate_db($oldnode, 1, 45);
+generate_db($oldnode, 46, 90);
+generate_db($oldnode, 91, 127);
+
+# Run regression tests on the old instance
+chdir dirname($ENV{PG_REGRESS});
+$oldnode->run_log([ 'createdb', '--port', $oldnode->port, 'regression' ]);
+run_log([$ENV{PG_REGRESS}, '--schedule', './serial_schedule',
+ '--dlpath', '.', '--bindir=', '--use-existing',
+ '--port', $oldnode->port]);
+
+# Take a dump before performing the upgrade as a base comparison.
+run_log(['pg_dumpall', '--no-sync', '--port', $oldnode->port,
+ '-f', "$tempdir/dump1.sql"]);
+
+# Move back to the original directory; all logs generated need to be
+# located there.
+chdir $startdir;
+
+# Stop the old instance before upgrading it.
+$oldnode->stop;
+
+# pg_upgrade needs the location of the old and new binaries. This test
+# relies on the binaries being in PATH, and so does pg_config. So fetch
+# the real binary location from it.
+my ($bindir, $stderr);
+my $result = IPC::Run::run [ 'pg_config', '--bindir' ], '>', \$bindir, '2>', \$stderr;
+chomp($bindir);
+
+# Initialize a new node for the upgrade.
+my $newnode = get_new_node('new_node');
+$newnode->init;
+
+# Time for the real run.
+run_log(['pg_upgrade', '-d', $oldnode->data_dir, '-D', $newnode->data_dir,
+ '-b', $bindir, '-B', $bindir, '-p', $oldnode->port, '-P', $newnode->port]);
+$newnode->start;
+
+# Take a second dump on the upgraded instance.
+run_log(['pg_dumpall', '--no-sync', '--port', $newnode->port,
+ '-f', "$tempdir/dump2.sql"]);
+
+# Compare the two dumps, there should be no differences.
+command_ok(['diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql"],
+ 'Old and new dump checks after pg_upgrade');
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
deleted file mode 100644
index 841da034b0..0000000000
--- a/src/bin/pg_upgrade/test.sh
+++ /dev/null
@@ -1,248 +0,0 @@
-#!/bin/sh
-
-# src/bin/pg_upgrade/test.sh
-#
-# Test driver for pg_upgrade. Initializes a new database cluster,
-# runs the regression tests (to put in some data), runs pg_dumpall,
-# runs pg_upgrade, runs pg_dumpall again, compares the dumps.
-#
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
-# Portions Copyright (c) 1994, Regents of the University of California
-
-set -e
-
-: ${MAKE=make}
-
-# Guard against parallel make issues (see comments in pg_regress.c)
-unset MAKEFLAGS
-unset MAKELEVEL
-
-# Run a given "initdb" binary and overlay the regression testing
-# authentication configuration.
-standard_initdb() {
- "$1" -N
- if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
- then
- cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
- fi
- ../../test/regress/pg_regress --config-auth "$PGDATA"
-}
-
-# Establish how the server will listen for connections
-testhost=`uname -s`
-
-case $testhost in
- MINGW*)
- LISTEN_ADDRESSES="localhost"
- PGHOST=localhost
- ;;
- *)
- LISTEN_ADDRESSES=""
- # Select a socket directory. The algorithm is from the "configure"
- # script; the outcome mimics pg_regress.c:make_temp_sockdir().
- PGHOST=$PG_REGRESS_SOCK_DIR
- if [ "x$PGHOST" = x ]; then
- {
- dir=`(umask 077 &&
- mktemp -d /tmp/pg_upgrade_check-XXXXXX) 2>/dev/null` &&
- [ -d "$dir" ]
- } ||
- {
- dir=/tmp/pg_upgrade_check-$$-$RANDOM
- (umask 077 && mkdir "$dir")
- } ||
- {
- echo "could not create socket temporary directory in \"/tmp\""
- exit 1
- }
-
- PGHOST=$dir
- trap 'rm -rf "$PGHOST"' 0
- trap 'exit 3' 1 2 13 15
- fi
- ;;
-esac
-
-POSTMASTER_OPTS="-F -c listen_addresses=$LISTEN_ADDRESSES -k \"$PGHOST\""
-export PGHOST
-
-# don't rely on $PWD here, as old shells don't set it
-temp_root=`pwd`/tmp_check
-
-if [ "$1" = '--install' ]; then
- temp_install=$temp_root/install
- bindir=$temp_install/$bindir
- libdir=$temp_install/$libdir
-
- "$MAKE" -s -C ../.. install DESTDIR="$temp_install"
-
- # platform-specific magic to find the shared libraries; see pg_regress.c
- LD_LIBRARY_PATH=$libdir:$LD_LIBRARY_PATH
- export LD_LIBRARY_PATH
- DYLD_LIBRARY_PATH=$libdir:$DYLD_LIBRARY_PATH
- export DYLD_LIBRARY_PATH
- LIBPATH=$libdir:$LIBPATH
- export LIBPATH
- SHLIB_PATH=$libdir:$SHLIB_PATH
- export SHLIB_PATH
- PATH=$libdir:$PATH
-
- # We need to make it use psql from our temporary installation,
- # because otherwise the installcheck run below would try to
- # use psql from the proper installation directory, which might
- # be outdated or missing. But don't override anything else that's
- # already in EXTRA_REGRESS_OPTS.
- EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --bindir='$bindir'"
- export EXTRA_REGRESS_OPTS
-fi
-
-: ${oldbindir=$bindir}
-
-: ${oldsrc=../../..}
-oldsrc=`cd "$oldsrc" && pwd`
-newsrc=`cd ../../.. && pwd`
-
-PATH=$bindir:$PATH
-export PATH
-
-BASE_PGDATA=$temp_root/data
-PGDATA="$BASE_PGDATA.old"
-export PGDATA
-rm -rf "$BASE_PGDATA" "$PGDATA"
-
-logdir=`pwd`/log
-rm -rf "$logdir"
-mkdir "$logdir"
-
-# Clear out any environment vars that might cause libpq to connect to
-# the wrong postmaster (cf pg_regress.c)
-#
-# Some shells, such as NetBSD's, return non-zero from unset if the variable
-# is already unset. Since we are operating under 'set -e', this causes the
-# script to fail. To guard against this, set them all to an empty string first.
-PGDATABASE=""; unset PGDATABASE
-PGUSER=""; unset PGUSER
-PGSERVICE=""; unset PGSERVICE
-PGSSLMODE=""; unset PGSSLMODE
-PGREQUIRESSL=""; unset PGREQUIRESSL
-PGCONNECT_TIMEOUT=""; unset PGCONNECT_TIMEOUT
-PGHOSTADDR=""; unset PGHOSTADDR
-
-# Select a non-conflicting port number, similarly to pg_regress.c
-PG_VERSION_NUM=`grep '#define PG_VERSION_NUM' "$newsrc"/src/include/pg_config.h | awk '{print $3}'`
-PGPORT=`expr $PG_VERSION_NUM % 16384 + 49152`
-export PGPORT
-
-i=0
-while psql -X postgres </dev/null 2>/dev/null
-do
- i=`expr $i + 1`
- if [ $i -eq 16 ]
- then
- echo port $PGPORT apparently in use
- exit 1
- fi
- PGPORT=`expr $PGPORT + 1`
- export PGPORT
-done
-
-# buildfarm may try to override port via EXTRA_REGRESS_OPTS ...
-EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --port=$PGPORT"
-export EXTRA_REGRESS_OPTS
-
-# enable echo so the user can see what is being executed
-set -x
-
-standard_initdb "$oldbindir"/initdb
-"$oldbindir"/pg_ctl start -l "$logdir/postmaster1.log" -o "$POSTMASTER_OPTS" -w
-
-# Create databases with names covering the ASCII bytes other than NUL, BEL,
-# LF, or CR. BEL would ring the terminal bell in the course of this test, and
-# it is not otherwise a special case. PostgreSQL doesn't support the rest.
-dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)
- if (i != 7 && i != 10 && i != 13) printf "%c", i }' </dev/null`
-# Exercise backslashes adjacent to double quotes, a Windows special case.
-dbname1='\"\'$dbname1'\\"\\\'
-dbname2=`awk 'BEGIN { for (i = 46; i < 91; i++) printf "%c", i }' </dev/null`
-dbname3=`awk 'BEGIN { for (i = 91; i < 128; i++) printf "%c", i }' </dev/null`
-createdb "$dbname1" || createdb_status=$?
-createdb "$dbname2" || createdb_status=$?
-createdb "$dbname3" || createdb_status=$?
-
-if "$MAKE" -C "$oldsrc" installcheck; then
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
- if [ "$newsrc" != "$oldsrc" ]; then
- oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%'; DROP FUNCTION public.myfunc(integer);"
- ;;
- 900??)
- fix_sql="SET bytea_output TO escape; UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%';"
- ;;
- 901??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE '$oldsrc%';"
- ;;
- esac
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- mv "$temp_root"/dump1.sql "$temp_root"/dump1.sql.orig
- sed "s;$oldsrc;$newsrc;g" "$temp_root"/dump1.sql.orig >"$temp_root"/dump1.sql
- fi
-else
- make_installcheck_status=$?
-fi
-"$oldbindir"/pg_ctl -m fast stop
-if [ -n "$createdb_status" ]; then
- exit 1
-fi
-if [ -n "$make_installcheck_status" ]; then
- exit 1
-fi
-if [ -n "$psql_fix_sql_status" ]; then
- exit 1
-fi
-if [ -n "$pg_dumpall1_status" ]; then
- echo "pg_dumpall of pre-upgrade database cluster failed"
- exit 1
-fi
-
-PGDATA=$BASE_PGDATA
-
-standard_initdb 'initdb'
-
-pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "${PGDATA}" -b "$oldbindir" -B "$bindir" -p "$PGPORT" -P "$PGPORT"
-
-pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-
-case $testhost in
- MINGW*) cmd /c analyze_new_cluster.bat ;;
- *) sh ./analyze_new_cluster.sh ;;
-esac
-
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
-pg_ctl -m fast stop
-
-# no need to echo commands anymore
-set +x
-echo
-
-if [ -n "$pg_dumpall2_status" ]; then
- echo "pg_dumpall of post-upgrade database cluster failed"
- exit 1
-fi
-
-case $testhost in
- MINGW*) cmd /c delete_old_cluster.bat ;;
- *) sh ./delete_old_cluster.sh ;;
-esac
-
-if diff "$temp_root"/dump1.sql "$temp_root"/dump2.sql >/dev/null; then
- echo PASSED
- exit 0
-else
- echo "Files $temp_root/dump1.sql and $temp_root/dump2.sql differ"
- echo "dumps were not identical"
- exit 1
-fi
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 8933920d9b..7d6fb6200a 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -34,7 +34,7 @@ if (-e "src/tools/msvc/buildenv.pl")
my $what = shift || "";
if ($what =~
-/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|upgradecheck|bincheck|recoverycheck)$/i
+/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|bincheck|recoverycheck)$/i
)
{
$what = uc $what;
@@ -89,8 +89,7 @@ my %command = (
MODULESCHECK => \&modulescheck,
ISOLATIONCHECK => \&isolationcheck,
BINCHECK => \&bincheck,
- RECOVERYCHECK => \&recoverycheck,
- UPGRADECHECK => \&upgradecheck,);
+ RECOVERYCHECK => \&recoverycheck,);
my $proc = $command{$what};
@@ -382,126 +381,6 @@ sub standard_initdb
$ENV{PGDATA}) == 0);
}
-# This is similar to appendShellString(). Perl system(@args) bypasses
-# cmd.exe, so omit the caret escape layer.
-sub quote_system_arg
-{
- my $arg = shift;
-
- # Change N >= 0 backslashes before a double quote to 2N+1 backslashes.
- $arg =~ s/(\\*)"/${\($1 . $1)}\\"/gs;
-
- # Change N >= 1 backslashes at end of argument to 2N backslashes.
- $arg =~ s/(\\+)$/${\($1 . $1)}/gs;
-
- # Wrap the whole thing in unescaped double quotes.
- return "\"$arg\"";
-}
-
-# Generate a database with a name made of a range of ASCII characters, useful
-# for testing pg_upgrade.
-sub generate_db
-{
- my ($prefix, $from_char, $to_char, $suffix) = @_;
-
- my $dbname = $prefix;
- for my $i ($from_char .. $to_char)
- {
- next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
- $dbname = $dbname . sprintf('%c', $i);
- }
- $dbname .= $suffix;
-
- system('createdb', quote_system_arg($dbname));
- my $status = $? >> 8;
- exit $status if $status;
-}
-
-sub upgradecheck
-{
- my $status;
- my $cwd = getcwd();
-
- # Much of this comes from the pg_upgrade test.sh script,
- # but it only covers the --install case, and not the case
- # where the old and new source or bin dirs are different.
- # i.e. only this version to this version check. That's
- # what pg_upgrade's "make check" does.
-
- $ENV{PGHOST} = 'localhost';
- $ENV{PGPORT} ||= 50432;
- my $tmp_root = "$topdir/src/bin/pg_upgrade/tmp_check";
- (mkdir $tmp_root || die $!) unless -d $tmp_root;
- my $upg_tmp_install = "$tmp_root/install"; # unshared temp install
- print "Setting up temp install\n\n";
- Install($upg_tmp_install, "all", $config);
-
- # Install does a chdir, so change back after that
- chdir $cwd;
- my ($bindir, $libdir, $oldsrc, $newsrc) =
- ("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
- $ENV{PATH} = "$bindir;$ENV{PATH}";
- my $data = "$tmp_root/data";
- $ENV{PGDATA} = "$data.old";
- my $logdir = "$topdir/src/bin/pg_upgrade/log";
- (mkdir $logdir || die $!) unless -d $logdir;
- print "\nRunning initdb on old cluster\n\n";
- standard_initdb() or exit 1;
- print "\nStarting old cluster\n\n";
- my @args = ('pg_ctl', 'start', '-l', "$logdir/postmaster1.log");
- system(@args) == 0 or exit 1;
-
- print "\nCreating databases with names covering most ASCII bytes\n\n";
- generate_db("\\\"\\", 1, 45, "\\\\\"\\\\\\");
- generate_db('', 46, 90, '');
- generate_db('', 91, 127, '');
-
- print "\nSetting up data for upgrading\n\n";
- installcheck();
-
- # now we can chdir into the source dir
- chdir "$topdir/src/bin/pg_upgrade";
- print "\nDumping old cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump1.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping old cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- $ENV{PGDATA} = "$data";
- print "\nSetting up new cluster\n\n";
- standard_initdb() or exit 1;
- print "\nRunning pg_upgrade\n\n";
- @args = (
- 'pg_upgrade', '-d', "$data.old", '-D', $data, '-b',
- $bindir, '-B', $bindir);
- system(@args) == 0 or exit 1;
- print "\nStarting new cluster\n\n";
- @args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
- system(@args) == 0 or exit 1;
- print "\nSetting up stats on new cluster\n\n";
- system(".\\analyze_new_cluster.bat") == 0 or exit 1;
- print "\nDumping new cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump2.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping new cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- print "\nDeleting old cluster\n\n";
- system(".\\delete_old_cluster.bat") == 0 or exit 1;
- print "\nComparing old and new cluster dumps\n\n";
-
- @args = ('diff', '-q', "$tmp_root/dump1.sql", "$tmp_root/dump2.sql");
- system(@args);
- $status = $?;
- if (!$status)
- {
- print "PASSED\n";
- }
- else
- {
- print "dumps not identical!\n";
- exit(1);
- }
-}
-
sub fetchRegressOpts
{
my $handle;
@@ -607,7 +486,6 @@ sub usage
" modulescheck run tests of modules in src/test/modules/\n",
" plcheck run tests of PL languages\n",
" recoverycheck run recovery test suite\n",
- " upgradecheck run tests of pg_upgrade\n",
"\nOptions for <schedule>:\n",
" serial serial mode\n",
" parallel parallel mode\n";
Michael,
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Thu, Apr 6, 2017 at 7:48 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:

On Wed, Apr 5, 2017 at 10:24 PM, Stephen Frost <sfrost@snowman.net> wrote:
* Michael Paquier (michael.paquier@gmail.com) wrote:
1) Initialize the old cluster and start it.
2) Create a bunch of databases covering the full range of ASCII characters.
3) Run regression tests.
4) Take a dump of the old cluster.
5) Stop the old cluster.
6) Initialize the new cluster.
7) Run pg_upgrade.
8) Start the new cluster.
9) Take a dump from it.
10) Run the deletion script. (Oops, forgot this part!)

Presumably the check to match the old dump against the new one is also
performed?

Yes. That's run with command_ok() at the end.
Attached is an updated patch to use --no-sync with pg_dumpall calls.
Some of those were specifically left around to test those code paths.
I'm not sure if these were among them, though; Andrew was the one who
reviewed the various pg_dumpall calls to add --no-sync in places.
Thanks!
Stephen
On Fri, Apr 14, 2017 at 8:03 PM, Stephen Frost <sfrost@snowman.net> wrote:
Some of those were specifically left around to test those code paths.
I'm not sure if these were among them, though; Andrew was the one who
reviewed the various pg_dumpall calls to add --no-sync in places.
Well, Andrew has pushed the patch I have written, and the calls of
pg_dumpall in pg_upgrade use --no-sync. The ones intentionally left
are in src/bin/pg_dump/t/002_pg_dump.pl.
--
Michael
* Michael Paquier (michael.paquier@gmail.com) wrote:
On Fri, Apr 14, 2017 at 8:03 PM, Stephen Frost <sfrost@snowman.net> wrote:
Some of those were specifically left around to test those code paths.
I'm not sure if these were among them, though; Andrew was the one who
reviewed the various pg_dumpall calls to add --no-sync in places.

Well, Andrew has pushed the patch I have written, and the calls of
pg_dumpall in pg_upgrade use --no-sync. The ones intentionally left
are in src/bin/pg_dump/t/002_pg_dump.pl.
Ok. :)
Thanks!
Stephen
On 4/14/17 02:00, Michael Paquier wrote:
Attached is an updated patch to use --no-sync with pg_dumpall calls.
Please send a rebased patch.
I propose splitting the single Perl script into three separate test
files: one for basic command-line option handling and so on (I would
like to expand that later), one for the main upgrade test, and one for
the funny database names tests.
In the testing file, you removed the paragraph that explains how to do
cross-version upgrade testing. It's unfortunate that we would lose that
functionality. What can we do about that?
We also need to have a plan for handling the build farm. Maybe keep the
vcregress.pl upgradecheck target as a thin wrapper for the time being?
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Peter Eisentraut wrote:
I propose splitting the single Perl script into three separate test
files: one for basic command-line option handling and so on (I would
like to expand that later), one for the main upgrade test, and one for
the funny database names tests.
Check.
We also need to have a plan for handling the build farm. Maybe keep the
vcregress.pl upgradecheck target as a thin wrapper for the time being?
The buildfarm already runs "make check" in src/bin/ when TAP tests are
enabled, which should be enough to trigger the rewritten test, no?
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
Peter Eisentraut wrote:
We also need to have a plan for handling the build farm. Maybe keep the
vcregress.pl upgradecheck target as a thin wrapper for the time being?
The buildfarm already runs "make check" in src/bin/ when TAP tests are
enabled, which should be enough to trigger the rewritten test, no?
I think Peter's on about the Windows case. Not sure how that's handled,
but it's not "make check".
regards, tom lane
On Wed, Sep 6, 2017 at 11:44 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
Peter Eisentraut wrote:
We also need to have a plan for handling the build farm. Maybe keep the
vcregress.pl upgradecheck target as a thin wrapper for the time being?

The buildfarm already runs "make check" in src/bin/ when TAP tests are
enabled, which should be enough to trigger the rewritten test, no?

I think Peter's on about the Windows case. Not sure how that's handled,
but it's not "make check".
For MSVC, one can use "vcregress.bat upgradecheck". So perhaps we
could keep upgradecheck for a short time but make it a noop instead
with this patch, and then remove it once buildfarm animals are
upgraded to a newer client version? I would prefer seeing a simple
removal of upgradecheck at the end, and put all TAP tests for binaries
under the bincheck path. This feels more natural.
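A sketch of what such a transitional no-op could look like in
vcregress.pl, assuming the UPGRADECHECK entry stays in the %command
table for a while; this is only an illustration, not part of the
posted patch:

sub upgradecheck
{
	# pg_upgrade is now covered by the TAP tests run under bincheck;
	# keep this target as a no-op until buildfarm clients stop
	# invoking it.
	print "upgradecheck is a no-op; pg_upgrade is tested under bincheck\n";
	return;
}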
--
Michael
On Wed, Sep 6, 2017 at 10:05 PM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
Please send a rebased patch.
I propose splitting the single Perl script into three separate test
files: one for basic command-line option handling and so on (I would
like to expand that later), one for the main upgrade test, and one for
the funny database names tests.
That makes sense. There will be additional overhead with the creation
of an extra server though.
In the testing file, you removed the paragraph that explains how to do
cross-version upgrade testing. It's unfortunate that we would lose that
functionality. What can we do about that?
Right, simply removing support for something which has been here for a
long time is no fun. I think that we should add to PostgresNode objects
a new bindir variable, used to define the path to the binaries. Any new
node created needs to go through init() or init_from_backup(), so a
node created with init() would set this bindir to what pg_config in
PATH reports, or to the value defined by the caller if there is one
(let's use an option in %params). A node created with
init_from_backup() inherits the path of its root node. This requires a
bit of refactoring first. It could also help for cross-version tests
outside the core code.
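A rough sketch of that idea only; the parameter and field names are
hypothetical, not existing PostgresNode API:

sub init
{
	my ($self, %params) = @_;

	# Let the caller pin the binaries to use; otherwise default to
	# the pg_config found in PATH, i.e. the currently installed
	# version.
	my $bindir = $params{bindir};
	if (!defined $bindir)
	{
		$bindir = `pg_config --bindir`;
		chomp $bindir;
	}
	$self->{bindir} = $bindir;

	# ... rest of the existing initialization, resolving initdb and
	# friends through $self->{bindir} ...
}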
In the existing scripts, there are the following variables:
- oldsrc, old version's source tree
- oldbindir, old version's installed bin dir
- bindir, this version's installed bin dir.
- libdir, this version's installed lib dir
bindir and libdir point to the currently installed version in the
tree, so we could do without them, no? oldbindir and oldsrc need to
be kept around to enforce the position of binaries for the old
version, as well as a proper shape of the original dump being compared
(+ drop of the past functions).
Then, for the pg_upgrade tests, let's check for ENV{oldbindir} and
enforce the bindir value of the PostgresNode to-be-upgraded. And also
for ENV{oldsrc}, first check if it is defined, and then do the
existing psql/dump changes. So one, in order to run cross-version
checks, would just need to rely on the fact that the version where
installcheck runs is the new version. Does that sound acceptable?
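Sketched out, the TAP script could then do something like this, again
with hypothetical names, building on the init() sketch above:

my $oldnode = get_new_node('old_node');
if (defined $ENV{oldbindir})
{
	# Cross-version run: initialize the to-be-upgraded node with the
	# old version's binaries.
	$oldnode->init(bindir => $ENV{oldbindir});
}
else
{
	# Same-version run, as "make check" does today.
	$oldnode->init;
}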
Looking at 5bab198, those don't run that often, but I definitely agree
that breaking something for no apparent reason is not cool either ;p
We also need to have a plan for handling the build farm. Maybe keep the
vcregress.pl upgradecheck target as a thin wrapper for the time being?
Or we could make upgradecheck a noop, then remove it once all the MSVC
animals have upgraded to a newer version of the buildfarm client which
does not use upgradecheck anymore (I am fine to send a patch or a pull
request to Andrew for that).
--
Michael
On Thu, Sep 7, 2017 at 11:14 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:
Or we could make upgradecheck a noop, then remove it once all the MSVC
animals have upgraded to a newer version of the buildfarm client which
does not use upgradecheck anymore (I am fine to send a patch or a pull
request to Andrew for that).
This patch is logged as "waiting on author" in the current commit
fest, but any new patch will depend on the feedback that any other
hacker has to offer based on the set of ideas I have posted upthread.
Hence I am yet unsure what is the correct way to move things forward.
So, any opinions? Peter or others?
--
Michael
Michael Paquier wrote:
On Thu, Sep 7, 2017 at 11:14 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:Or we could make upgradecheck a noop, then remove it once all the MSVC
animals have upgraded to a newer version of the buildfarm client which
does not use upgradecheck anymore (I am fine to send a patch or a pull
request to Andrew for that).

This patch is logged as "waiting on author" in the current commit
fest, but any new patch will depend on the feedback that any other
hacker has to offer based on the set of ideas I have posted upthread.
Hence I am yet unsure what is the correct way to move things forward.
So, any opinions? Peter or others?
I think the first step is to send the rebased version of the patch. It
was last posted in April ...
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Sep 19, 2017 at 6:15 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Michael Paquier wrote:
On Thu, Sep 7, 2017 at 11:14 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:Or we could make upgradecheck a noop, then remove it once all the MSVC
animals have upgraded to a newer version of the buildfarm client which
does not use upgradecheck anymore (I am fine to send a patch or a pull
request to Andrew for that).This patch is logged as "waiting on author" in the current commit
fest, but any new patch will depend on the feedback that any other
hacker has to offer based on the set of ideas I have posted upthread.
Hence I am yet unsure what is the correct way to move things forward.
So, any opinions? Peter or others?

I think the first step is to send the rebased version of the patch. It
was last posted in April ...
Here you go. I have not done anything fancy for cross-version tests yet.
--
Michael
Attachments:
pgupgrade-tap-test-v4.patch (application/octet-stream)
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index 1861e7e2f7..4ac6920f2c 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -445,7 +445,6 @@ $ENV{CONFIG}="Debug";
<userinput>vcregress isolationcheck</userinput>
<userinput>vcregress bincheck</userinput>
<userinput>vcregress recoverycheck</userinput>
-<userinput>vcregress upgradecheck</userinput>
</screen>
To change the schedule used (default is parallel), append it to the
diff --git a/src/bin/pg_upgrade/Makefile b/src/bin/pg_upgrade/Makefile
index 1d6ee702c6..bccab1d723 100644
--- a/src/bin/pg_upgrade/Makefile
+++ b/src/bin/pg_upgrade/Makefile
@@ -36,8 +36,8 @@ clean distclean maintainer-clean:
pg_upgrade_dump_globals.sql \
pg_upgrade_dump_*.custom pg_upgrade_*.log
-check: test.sh all
- MAKE=$(MAKE) bindir=$(bindir) libdir=$(libdir) EXTRA_REGRESS_OPTS="$(EXTRA_REGRESS_OPTS)" $(SHELL) $< --install
+check:
+ $(prove_check)
-# installcheck is not supported because there's no meaningful way to test
-# pg_upgrade against a single already-running server
+installcheck:
+ $(prove_installcheck)
diff --git a/src/bin/pg_upgrade/t/010_pg_upgrade.pl b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
new file mode 100644
index 0000000000..a58ee267bc
--- /dev/null
+++ b/src/bin/pg_upgrade/t/010_pg_upgrade.pl
@@ -0,0 +1,93 @@
+# Set of tests for pg_upgrade.
+use strict;
+use warnings;
+use Cwd;
+use Config;
+use File::Basename;
+use IPC::Run;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 9;
+
+program_help_ok('pg_upgrade');
+program_version_ok('pg_upgrade');
+program_options_handling_ok('pg_upgrade');
+
+# Generate a database with a name made of a range of ASCII characters.
+sub generate_db
+{
+ my ($node, $from_char, $to_char) = @_;
+
+ my $dbname = '';
+ for my $i ($from_char .. $to_char)
+ {
+ next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
+ $dbname = $dbname . sprintf('%c', $i);
+ }
+ $node->run_log([ 'createdb', '--port', $node->port, $dbname ]);
+}
+
+my $startdir = getcwd();
+
+# From now on, the test of pg_upgrade consists of setting up an instance
+# on which the regression tests are run. This is the source instance for
+# the upgrade. Then a new, fresh instance is created and used as the
+# target instance for the upgrade. Before running the upgrade, a logical
+# dump of the old instance is taken, and a second logical dump of the
+# new instance is taken after the upgrade. The upgrade test passes if
+# the two dumps do not differ.
+
+# Temporary location for dumps taken
+my $tempdir = TestLib::tempdir;
+
+# Initialize node to upgrade
+my $oldnode = get_new_node('old_node');
+$oldnode->init;
+$oldnode->start;
+
+# Creating databases with names covering most ASCII bytes
+generate_db($oldnode, 1, 45);
+generate_db($oldnode, 46, 90);
+generate_db($oldnode, 91, 127);
+
+# Run regression tests on the old instance
+chdir dirname($ENV{PG_REGRESS});
+$oldnode->run_log([ 'createdb', '--port', $oldnode->port, 'regression' ]);
+run_log([$ENV{PG_REGRESS}, '--schedule', './serial_schedule',
+ '--dlpath', '.', '--bindir=', '--use-existing',
+ '--port', $oldnode->port]);
+
+# Take a dump before performing the upgrade as a base comparison.
+run_log(['pg_dumpall', '--no-sync', '--port', $oldnode->port,
+ '-f', "$tempdir/dump1.sql"]);
+
+# Move back to the original directory, as all the logs generated need
+# to be located there.
+chdir $startdir;
+
+# Upgrade the instance.
+$oldnode->stop;
+
+# pg_upgrade needs the location of the old and new binaries. This test
+# relies on the binaries being in PATH, and so does pg_config, so use
+# pg_config to find the real binary location.
+my ($bindir, $stderr);
+my $result = IPC::Run::run [ 'pg_config', '--bindir' ], '>', \$bindir, '2>', \$stderr;
+chomp($bindir);
+
+# Initialize a new node for the upgrade.
+my $newnode = get_new_node('new_node');
+$newnode->init;
+
+# Time for the real run.
+run_log(['pg_upgrade', '-d', $oldnode->data_dir, '-D', $newnode->data_dir,
+ '-b', $bindir, '-B', $bindir, '-p', $oldnode->port, '-P', $newnode->port]);
+$newnode->start;
+
+# Take a second dump on the upgraded instance.
+run_log(['pg_dumpall', '--no-sync', '--port', $newnode->port,
+ '-f', "$tempdir/dump2.sql"]);
+
+# Compare the two dumps, there should be no differences.
+command_ok(['diff', '-q', "$tempdir/dump1.sql", "$tempdir/dump2.sql"],
+ 'Old and new dump checks after pg_upgrade');
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
deleted file mode 100644
index f4556341f3..0000000000
--- a/src/bin/pg_upgrade/test.sh
+++ /dev/null
@@ -1,262 +0,0 @@
-#!/bin/sh
-
-# src/bin/pg_upgrade/test.sh
-#
-# Test driver for pg_upgrade. Initializes a new database cluster,
-# runs the regression tests (to put in some data), runs pg_dumpall,
-# runs pg_upgrade, runs pg_dumpall again, compares the dumps.
-#
-# Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
-# Portions Copyright (c) 1994, Regents of the University of California
-
-set -e
-
-: ${MAKE=make}
-
-# Guard against parallel make issues (see comments in pg_regress.c)
-unset MAKEFLAGS
-unset MAKELEVEL
-
-# Run a given "initdb" binary and overlay the regression testing
-# authentication configuration.
-standard_initdb() {
- "$1" -N
- if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
- then
- cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
- fi
- ../../test/regress/pg_regress --config-auth "$PGDATA"
-}
-
-# Establish how the server will listen for connections
-testhost=`uname -s`
-
-case $testhost in
- MINGW*)
- LISTEN_ADDRESSES="localhost"
- PGHOST=localhost
- ;;
- *)
- LISTEN_ADDRESSES=""
- # Select a socket directory. The algorithm is from the "configure"
- # script; the outcome mimics pg_regress.c:make_temp_sockdir().
- PGHOST=$PG_REGRESS_SOCK_DIR
- if [ "x$PGHOST" = x ]; then
- {
- dir=`(umask 077 &&
- mktemp -d /tmp/pg_upgrade_check-XXXXXX) 2>/dev/null` &&
- [ -d "$dir" ]
- } ||
- {
- dir=/tmp/pg_upgrade_check-$$-$RANDOM
- (umask 077 && mkdir "$dir")
- } ||
- {
- echo "could not create socket temporary directory in \"/tmp\""
- exit 1
- }
-
- PGHOST=$dir
- trap 'rm -rf "$PGHOST"' 0
- trap 'exit 3' 1 2 13 15
- fi
- ;;
-esac
-
-POSTMASTER_OPTS="-F -c listen_addresses=$LISTEN_ADDRESSES -k \"$PGHOST\""
-export PGHOST
-
-# don't rely on $PWD here, as old shells don't set it
-temp_root=`pwd`/tmp_check
-
-if [ "$1" = '--install' ]; then
- temp_install=$temp_root/install
- bindir=$temp_install/$bindir
- libdir=$temp_install/$libdir
-
- "$MAKE" -s -C ../.. install DESTDIR="$temp_install"
-
- # platform-specific magic to find the shared libraries; see pg_regress.c
- LD_LIBRARY_PATH=$libdir:$LD_LIBRARY_PATH
- export LD_LIBRARY_PATH
- DYLD_LIBRARY_PATH=$libdir:$DYLD_LIBRARY_PATH
- export DYLD_LIBRARY_PATH
- LIBPATH=$libdir:$LIBPATH
- export LIBPATH
- SHLIB_PATH=$libdir:$SHLIB_PATH
- export SHLIB_PATH
- PATH=$libdir:$PATH
-
- # We need to make it use psql from our temporary installation,
- # because otherwise the installcheck run below would try to
- # use psql from the proper installation directory, which might
- # be outdated or missing. But don't override anything else that's
- # already in EXTRA_REGRESS_OPTS.
- EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --bindir='$bindir'"
- export EXTRA_REGRESS_OPTS
-fi
-
-: ${oldbindir=$bindir}
-
-: ${oldsrc=../../..}
-oldsrc=`cd "$oldsrc" && pwd`
-newsrc=`cd ../../.. && pwd`
-
-PATH=$bindir:$PATH
-export PATH
-
-BASE_PGDATA=$temp_root/data
-PGDATA="$BASE_PGDATA.old"
-export PGDATA
-rm -rf "$BASE_PGDATA" "$PGDATA"
-
-logdir=`pwd`/log
-rm -rf "$logdir"
-mkdir "$logdir"
-
-# Clear out any environment vars that might cause libpq to connect to
-# the wrong postmaster (cf pg_regress.c)
-#
-# Some shells, such as NetBSD's, return non-zero from unset if the variable
-# is already unset. Since we are operating under 'set -e', this causes the
-# script to fail. To guard against this, set them all to an empty string first.
-PGDATABASE=""; unset PGDATABASE
-PGUSER=""; unset PGUSER
-PGSERVICE=""; unset PGSERVICE
-PGSSLMODE=""; unset PGSSLMODE
-PGREQUIRESSL=""; unset PGREQUIRESSL
-PGCONNECT_TIMEOUT=""; unset PGCONNECT_TIMEOUT
-PGHOSTADDR=""; unset PGHOSTADDR
-
-# Select a non-conflicting port number, similarly to pg_regress.c
-PG_VERSION_NUM=`grep '#define PG_VERSION_NUM' "$newsrc"/src/include/pg_config.h | awk '{print $3}'`
-PGPORT=`expr $PG_VERSION_NUM % 16384 + 49152`
-export PGPORT
-
-i=0
-while psql -X postgres </dev/null 2>/dev/null
-do
- i=`expr $i + 1`
- if [ $i -eq 16 ]
- then
- echo port $PGPORT apparently in use
- exit 1
- fi
- PGPORT=`expr $PGPORT + 1`
- export PGPORT
-done
-
-# buildfarm may try to override port via EXTRA_REGRESS_OPTS ...
-EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --port=$PGPORT"
-export EXTRA_REGRESS_OPTS
-
-# enable echo so the user can see what is being executed
-set -x
-
-standard_initdb "$oldbindir"/initdb
-"$oldbindir"/pg_ctl start -l "$logdir/postmaster1.log" -o "$POSTMASTER_OPTS" -w
-
-# Create databases with names covering the ASCII bytes other than NUL, BEL,
-# LF, or CR. BEL would ring the terminal bell in the course of this test, and
-# it is not otherwise a special case. PostgreSQL doesn't support the rest.
-dbname1=`awk 'BEGIN { for (i= 1; i < 46; i++)
- if (i != 7 && i != 10 && i != 13) printf "%c", i }' </dev/null`
-# Exercise backslashes adjacent to double quotes, a Windows special case.
-dbname1='\"\'$dbname1'\\"\\\'
-dbname2=`awk 'BEGIN { for (i = 46; i < 91; i++) printf "%c", i }' </dev/null`
-dbname3=`awk 'BEGIN { for (i = 91; i < 128; i++) printf "%c", i }' </dev/null`
-createdb "$dbname1" || createdb_status=$?
-createdb "$dbname2" || createdb_status=$?
-createdb "$dbname3" || createdb_status=$?
-
-if "$MAKE" -C "$oldsrc" installcheck; then
- oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
-
- # before dumping, get rid of objects not existing in later versions
- if [ "$newsrc" != "$oldsrc" ]; then
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="DROP FUNCTION public.myfunc(integer); DROP FUNCTION public.oldstyle_length(integer, text);"
- ;;
- *)
- fix_sql="DROP FUNCTION public.oldstyle_length(integer, text);"
- ;;
- esac
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
- fi
-
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
-
- if [ "$newsrc" != "$oldsrc" ]; then
- # update references to old source tree's regress.so etc
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%';"
- ;;
- *)
- fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE '$oldsrc%';"
- ;;
- esac
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- mv "$temp_root"/dump1.sql "$temp_root"/dump1.sql.orig
- sed "s;$oldsrc;$newsrc;g" "$temp_root"/dump1.sql.orig >"$temp_root"/dump1.sql
- fi
-else
- make_installcheck_status=$?
-fi
-"$oldbindir"/pg_ctl -m fast stop
-if [ -n "$createdb_status" ]; then
- exit 1
-fi
-if [ -n "$make_installcheck_status" ]; then
- exit 1
-fi
-if [ -n "$psql_fix_sql_status" ]; then
- exit 1
-fi
-if [ -n "$pg_dumpall1_status" ]; then
- echo "pg_dumpall of pre-upgrade database cluster failed"
- exit 1
-fi
-
-PGDATA=$BASE_PGDATA
-
-standard_initdb 'initdb'
-
-pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "${PGDATA}" -b "$oldbindir" -B "$bindir" -p "$PGPORT" -P "$PGPORT"
-
-pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-
-case $testhost in
- MINGW*) cmd /c analyze_new_cluster.bat ;;
- *) sh ./analyze_new_cluster.sh ;;
-esac
-
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
-pg_ctl -m fast stop
-
-# no need to echo commands anymore
-set +x
-echo
-
-if [ -n "$pg_dumpall2_status" ]; then
- echo "pg_dumpall of post-upgrade database cluster failed"
- exit 1
-fi
-
-case $testhost in
- MINGW*) cmd /c delete_old_cluster.bat ;;
- *) sh ./delete_old_cluster.sh ;;
-esac
-
-if diff "$temp_root"/dump1.sql "$temp_root"/dump2.sql >/dev/null; then
- echo PASSED
- exit 0
-else
- echo "Files $temp_root/dump1.sql and $temp_root/dump2.sql differ"
- echo "dumps were not identical"
- exit 1
-fi
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 2904679114..3a91f99525 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -34,7 +34,7 @@ if (-e "src/tools/msvc/buildenv.pl")
my $what = shift || "";
if ($what =~
-/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|upgradecheck|bincheck|recoverycheck|taptest)$/i
+/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|bincheck|recoverycheck|taptest)$/i
)
{
$what = uc $what;
@@ -83,7 +83,6 @@ my %command = (
ISOLATIONCHECK => \&isolationcheck,
BINCHECK => \&bincheck,
RECOVERYCHECK => \&recoverycheck,
- UPGRADECHECK => \&upgradecheck,
TAPTEST => \&taptest,);
my $proc = $command{$what};
@@ -407,126 +406,6 @@ sub standard_initdb
$ENV{PGDATA}) == 0);
}
-# This is similar to appendShellString(). Perl system(@args) bypasses
-# cmd.exe, so omit the caret escape layer.
-sub quote_system_arg
-{
- my $arg = shift;
-
- # Change N >= 0 backslashes before a double quote to 2N+1 backslashes.
- $arg =~ s/(\\*)"/${\($1 . $1)}\\"/gs;
-
- # Change N >= 1 backslashes at end of argument to 2N backslashes.
- $arg =~ s/(\\+)$/${\($1 . $1)}/gs;
-
- # Wrap the whole thing in unescaped double quotes.
- return "\"$arg\"";
-}
-
-# Generate a database with a name made of a range of ASCII characters, useful
-# for testing pg_upgrade.
-sub generate_db
-{
- my ($prefix, $from_char, $to_char, $suffix) = @_;
-
- my $dbname = $prefix;
- for my $i ($from_char .. $to_char)
- {
- next if $i == 7 || $i == 10 || $i == 13; # skip BEL, LF, and CR
- $dbname = $dbname . sprintf('%c', $i);
- }
- $dbname .= $suffix;
-
- system('createdb', quote_system_arg($dbname));
- my $status = $? >> 8;
- exit $status if $status;
-}
-
-sub upgradecheck
-{
- my $status;
- my $cwd = getcwd();
-
- # Much of this comes from the pg_upgrade test.sh script,
- # but it only covers the --install case, and not the case
- # where the old and new source or bin dirs are different.
- # i.e. only this version to this version check. That's
- # what pg_upgrade's "make check" does.
-
- $ENV{PGHOST} = 'localhost';
- $ENV{PGPORT} ||= 50432;
- my $tmp_root = "$topdir/src/bin/pg_upgrade/tmp_check";
- (mkdir $tmp_root || die $!) unless -d $tmp_root;
- my $upg_tmp_install = "$tmp_root/install"; # unshared temp install
- print "Setting up temp install\n\n";
- Install($upg_tmp_install, "all", $config);
-
- # Install does a chdir, so change back after that
- chdir $cwd;
- my ($bindir, $libdir, $oldsrc, $newsrc) =
- ("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
- $ENV{PATH} = "$bindir;$ENV{PATH}";
- my $data = "$tmp_root/data";
- $ENV{PGDATA} = "$data.old";
- my $logdir = "$topdir/src/bin/pg_upgrade/log";
- (mkdir $logdir || die $!) unless -d $logdir;
- print "\nRunning initdb on old cluster\n\n";
- standard_initdb() or exit 1;
- print "\nStarting old cluster\n\n";
- my @args = ('pg_ctl', 'start', '-l', "$logdir/postmaster1.log");
- system(@args) == 0 or exit 1;
-
- print "\nCreating databases with names covering most ASCII bytes\n\n";
- generate_db("\\\"\\", 1, 45, "\\\\\"\\\\\\");
- generate_db('', 46, 90, '');
- generate_db('', 91, 127, '');
-
- print "\nSetting up data for upgrading\n\n";
- installcheck();
-
- # now we can chdir into the source dir
- chdir "$topdir/src/bin/pg_upgrade";
- print "\nDumping old cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump1.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping old cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- $ENV{PGDATA} = "$data";
- print "\nSetting up new cluster\n\n";
- standard_initdb() or exit 1;
- print "\nRunning pg_upgrade\n\n";
- @args = (
- 'pg_upgrade', '-d', "$data.old", '-D', $data, '-b',
- $bindir, '-B', $bindir);
- system(@args) == 0 or exit 1;
- print "\nStarting new cluster\n\n";
- @args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
- system(@args) == 0 or exit 1;
- print "\nSetting up stats on new cluster\n\n";
- system(".\\analyze_new_cluster.bat") == 0 or exit 1;
- print "\nDumping new cluster\n\n";
- @args = ('pg_dumpall', '-f', "$tmp_root/dump2.sql");
- system(@args) == 0 or exit 1;
- print "\nStopping new cluster\n\n";
- system("pg_ctl stop") == 0 or exit 1;
- print "\nDeleting old cluster\n\n";
- system(".\\delete_old_cluster.bat") == 0 or exit 1;
- print "\nComparing old and new cluster dumps\n\n";
-
- @args = ('diff', '-q', "$tmp_root/dump1.sql", "$tmp_root/dump2.sql");
- system(@args);
- $status = $?;
- if (!$status)
- {
- print "PASSED\n";
- }
- else
- {
- print "dumps not identical!\n";
- exit(1);
- }
-}
-
sub fetchRegressOpts
{
my $handle;
@@ -636,7 +515,6 @@ sub usage
" plcheck run tests of PL languages\n",
" recoverycheck run recovery test suite\n",
" taptest run an arbitrary TAP test set\n",
- " upgradecheck run tests of pg_upgrade\n",
"\nOptions for <arg>: (used by check and installcheck)\n",
" serial serial mode\n",
" parallel parallel mode\n",
On 9/19/17 07:37, Michael Paquier wrote:
This patch is logged as "waiting on author" in the current commit
fest, but any new patch will depend on the feedback other hackers
have to offer on the set of ideas I have posted upthread.
Hence I am still unsure of the correct way to move things forward.
So, any opinions? Peter or others?
I think the first step is to send the rebased version of the patch. It
was last posted in April ...
Here you go. I have not done anything fancy for cross-version tests yet.
To get things rolling, I have committed just the basic TAP tests, to
give it a spin on the build farm. I'll work through the rest in the
coming days.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Sep 20, 2017 at 7:57 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
To get things rolling, I have committed just the basic TAP tests, to
give it a spin on the build farm. I'll work through the rest in the
coming days.
Thanks!
--
Michael
On 9/19/17 19:00, Michael Paquier wrote:
On Wed, Sep 20, 2017 at 7:57 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
To get things rolling, I have committed just the basic TAP tests, to
give it a spin on the build farm. I'll work through the rest in the
coming days.
I have reverted this because of the build farm issue. Putting the patch
on hold in the CF until we have a new plan.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 9/22/17 16:48, Peter Eisentraut wrote:
On 9/19/17 19:00, Michael Paquier wrote:
On Wed, Sep 20, 2017 at 7:57 AM, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:
To get things rolling, I have committed just the basic TAP tests, to
give it a spin on the build farm. I'll work through the rest in the
coming days.
I have reverted this because of the build farm issue. Putting the patch
on hold in the CF until we have a new plan.
Set to "Returned with feedback" now.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services