perlcritic and perltidy

Started by Andrew Dunstan over 7 years ago, 25 messages
#1 Andrew Dunstan
andrew.dunstan@2ndquadrant.com
1 attachment(s)

The attached patch allows a clean run from the following script adapted
from pgperltidy:

{
    find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print
    find . -type f -perm -100 -exec file {} \; -print \
           | egrep -i ':.*perl[0-9]*\>' \
           | cut -d: -f1
} \
| sort -u  | xargs perlcritic --exclude ProhibitLeadingZeros

The changes are

* disable perlcritic on Gen_dummy_probes.pl, since it's generated code
from s2p
* protect a couple of package declarations in plperl code from
perltidy, since it splits the lines and renders the 'no critic'
directives there useless (see the sketch after this list)
* mark a string eval in Catalog.pm with 'no critic', since it's
clearly necessary.
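
For the second item, the trick is perltidy's format-skipping markers:
'#<<<' and '#>>>' (the defaults for --format-skipping) tell perltidy to
leave the enclosed lines alone, so the 'no critic' annotation stays on
the same line as the package statement it excuses. Roughly like this
(placeholder package name, just to show the pattern):

    #<<< keep perltidy from rewrapping the next line
    package Some::Package;    ## no critic (RequireFilenameMatchesPackage)
    #>>>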

We should probably set up a policy file for perlcritic that turns off or
at least lowers the severity of the ProhibitLeadingZeros policy. Making
it severity 5 seems a bit odd.
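
For what it's worth, a minimal perlcritic profile along those lines
might look something like this (just a sketch; the file name and
location are still to be decided):

    # perlcriticrc sketch: only report severity-5 policies by default
    severity = 5

    # demote ProhibitLeadingZeros so a severity-5 run no longer trips
    # over octal constants like file modes
    [ValuesAndExpressions::ProhibitLeadingZeros]
    severity = 2

perlcritic would then be pointed at it with --profile, instead of the
--exclude switch used in the script above.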

w.r.t. perltidy, I note that our policy has these two lines:

--vertical-tightness=2
--vertical-tightness-closing=2

I've been looking at syncing the buildfarm client with our core code
perltidy settings. However, I don't actually like these two and I've
decided to exercise some editorial discretion and not use them.

Note that the perltidy man page does suggest that these can make things
less readable, and it also states unequivocally "You must also use the
-lp flag when you use the -vt flag". That is the --line-up-parentheses
flag and it's something we don't use. Enabling it would generate about
12k lines of diff.
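
(For anyone who hasn't used it, -lp indents continuation lines so that
they line up just past the opening parenthesis instead of using the
normal continuation indent. Very roughly, on a made-up call:

    # current style, without --line-up-parentheses
    print sprintf("total: %.02f sec; number: %d\n",
        $elapsed, $b);

    # with --line-up-parentheses
    print sprintf("total: %.02f sec; number: %d\n",
                  $elapsed, $b);

which is why turning it on would touch so much code.)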

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

perlcriticfix.patch (text/x-patch)
diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index 7497d9c..f387c86 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -250,7 +250,10 @@ sub ParseData
 
 			if ($lcnt == $rcnt)
 			{
-				eval '$hash_ref = ' . $_;
+				# We're treating the input line as a piece of Perl, so we
+				# need to use string eval here. Tell perlcritic we know what
+				# we're doing.
+				eval '$hash_ref = ' . $_; ## no critic (ProhibitStringyEval)
 				if (!ref $hash_ref)
 				{
 					die "$input_file: error parsing line $.:\n$_\n";
diff --git a/src/backend/utils/Gen_dummy_probes.pl b/src/backend/utils/Gen_dummy_probes.pl
index a38fea3..91d7968 100644
--- a/src/backend/utils/Gen_dummy_probes.pl
+++ b/src/backend/utils/Gen_dummy_probes.pl
@@ -14,6 +14,9 @@
 #
 #-------------------------------------------------------------------------
 
+# turn off perlcritic for autogened code
+## no critic
+
 $0 =~ s/^.*?(\w+)[\.\w+]*$/$1/;
 
 use strict;
diff --git a/src/pl/plperl/plc_perlboot.pl b/src/pl/plperl/plc_perlboot.pl
index ff05964..05334a6 100644
--- a/src/pl/plperl/plc_perlboot.pl
+++ b/src/pl/plperl/plc_perlboot.pl
@@ -51,9 +51,9 @@ sub ::encode_array_constructor
 }
 
 {
-
-	package PostgreSQL::InServer
-	  ;    ## no critic (RequireFilenameMatchesPackage);
+#<<< protect next line from perltidy so perlcritic annotation works
+	package PostgreSQL::InServer;  ## no critic (RequireFilenameMatchesPackage)
+#>>>
 	use strict;
 	use warnings;
 
diff --git a/src/pl/plperl/plc_trusted.pl b/src/pl/plperl/plc_trusted.pl
index 7b11a3f..dea3727 100644
--- a/src/pl/plperl/plc_trusted.pl
+++ b/src/pl/plperl/plc_trusted.pl
@@ -1,7 +1,8 @@
 #  src/pl/plperl/plc_trusted.pl
 
-package PostgreSQL::InServer::safe
-  ;    ## no critic (RequireFilenameMatchesPackage);
+#<<< protect next line from perltidy so perlcritic annotation works
+package PostgreSQL::InServer::safe; ## no critic (RequireFilenameMatchesPackage)
+#>>>
 
 # Load widely useful pragmas into plperl to make them available.
 #
#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#1)
Re: perlcritic and perltidy

Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:

The attached patch allows a clean run from the following script adapted
from pgperltidy:

I'm hardly a perl expert, but those changes look reasonable.

w.r.t. perltidy, I note that our policy has these two lines:
--vertical-tightness=2
--vertical-tightness-closing=2
I've been looking at syncing the buildfarm client with our core code
perltidy settings. However, I don't actually like these two and I've
decided to exercise some editorial discretion and not use them.

Okay ...

Note that the perltidy man page does suggest that these can make things
less readable, and it also states unequivocally "You must also use the
-lp flag when you use the -vt flag". That is the --line-up-parentheses
flag and it's something we don't use. Enabling it would generate about
12k lines of diff.

What sort of changes do we get if we remove those two flags as you prefer?
It'd help to see some examples.

Since we just went to a new perltidy version, and made some other
policy changes for it, in HEAD, it'd make sense to make any further
changes in this same release cycle rather than drip drip drip over
multiple cycles. We just need to get some consensus about what
style we like.

regards, tom lane

#3 Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: Tom Lane (#2)
1 attachment(s)
Re: perlcritic and perltidy

On 05/06/2018 11:53 AM, Tom Lane wrote:

Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:

The attached patch allows a clean run from the following script adapted
from pgperltidy:

I'm hardly a perl expert, but those changes look reasonable.

w.r.t. perltidy, I note that our policy has these two lines:
--vertical-tightness=2
--vertical-tightness-closing=2
I've been looking at syncing the buildfarm client with our core code
perltidy settings. However, I don't actually like these two and I've
decided to exercise some editorial discretion and not use them.

Okay ...

Note that the perltidy man page does suggest that these can make things
less readable, and it also states unequivocally "You must also use the
-lp flag when you use the -vt flag". That is the --line-up-parentheses
flag and it's something we don't use. Enabling it would generate about
12k lines of diff.

What sort of changes do we get if we remove those two flags as you prefer?
It'd help to see some examples.

Since we just went to a new perltidy version, and made some other
policy changes for it, in HEAD, it'd make sense to make any further
changes in this same release cycle rather than drip drip drip over
multiple cycles. We just need to get some consensus about what
style we like.

Essentially it adds some vertical whitespace to structures so that the
enclosing braces etc. appear on their own lines. A very typical change
looks like this:

    -         { code      => $code,
    +         {
    +           code      => $code,
                ucs       => $ucs,
                comment   => $rest,
                direction => $direction,
                f         => $in_file,
    -           l         => $. };
    +           l         => $.
    +         };

I am attaching the diff for a complete run with these two settings
removed. It's about 10k lines.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

pgerltidy-no-verticaltightness.patch (text/x-patch)
diff --git a/contrib/intarray/bench/bench.pl b/contrib/intarray/bench/bench.pl
index 92035d6..c7d9a7b 100755
--- a/contrib/intarray/bench/bench.pl
+++ b/contrib/intarray/bench/bench.pl
@@ -118,7 +118,8 @@ if ($opt{o})
 print sprintf(
 	"total: %.02f sec; number: %d; for one: %.03f sec; found %d docs\n",
 	$elapsed, $b, $elapsed / $b,
-	$count + 1);
+	$count + 1
+);
 $dbi->disconnect;
 
 sub exec_sql
diff --git a/contrib/seg/sort-segments.pl b/contrib/seg/sort-segments.pl
index 04eafd9..e0e033e 100755
--- a/contrib/seg/sort-segments.pl
+++ b/contrib/seg/sort-segments.pl
@@ -21,7 +21,8 @@ foreach (
 		my $valB = pop @ar;
 		$valB =~ s/[~<> ]+//g;
 		$valA <=> $valB
-	} @rows)
+	} @rows
+  )
 {
 	print "$_\n";
 }
diff --git a/doc/src/sgml/mk_feature_tables.pl b/doc/src/sgml/mk_feature_tables.pl
index 476e50e..503c977 100644
--- a/doc/src/sgml/mk_feature_tables.pl
+++ b/doc/src/sgml/mk_feature_tables.pl
@@ -33,8 +33,10 @@ print "<tbody>\n";
 while (<$feat>)
 {
 	chomp;
-	my ($feature_id,      $feature_name, $subfeature_id,
-		$subfeature_name, $is_supported, $comments) = split /\t/;
+	my (
+		$feature_id,      $feature_name, $subfeature_id,
+		$subfeature_name, $is_supported, $comments
+	) = split /\t/;
 
 	$is_supported eq $yesno || next;
 
diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index 7497d9c..441cf8f 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -34,7 +34,8 @@ sub ParseHeader
 		'Oid'           => 'oid',
 		'NameData'      => 'name',
 		'TransactionId' => 'xid',
-		'XLogRecPtr'    => 'pg_lsn');
+		'XLogRecPtr'    => 'pg_lsn'
+	);
 
 	my %catalog;
 	my $declaring_attributes = 0;
@@ -95,10 +96,12 @@ sub ParseHeader
 		elsif (/^DECLARE_(UNIQUE_)?INDEX\(\s*(\w+),\s*(\d+),\s*(.+)\)/)
 		{
 			push @{ $catalog{indexing} },
-			  { is_unique => $1 ? 1 : 0,
+			  {
+				is_unique => $1 ? 1 : 0,
 				index_name => $2,
 				index_oid  => $3,
-				index_decl => $4 };
+				index_decl => $4
+			  };
 		}
 		elsif (/^CATALOG\((\w+),(\d+),(\w+)\)/)
 		{
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl
index fb61db0..d24dc5f 100644
--- a/src/backend/catalog/genbki.pl
+++ b/src/backend/catalog/genbki.pl
@@ -245,7 +245,8 @@ my %lookup_kind = (
 	pg_operator => \%operoids,
 	pg_opfamily => \%opfoids,
 	pg_proc     => \%procoids,
-	pg_type     => \%typeoids);
+	pg_type     => \%typeoids
+);
 
 
 # Open temp files
@@ -631,7 +632,8 @@ sub gen_pg_attribute
 				{ name => 'cmin',     type => 'cid' },
 				{ name => 'xmax',     type => 'xid' },
 				{ name => 'cmax',     type => 'cid' },
-				{ name => 'tableoid', type => 'oid' });
+				{ name => 'tableoid', type => 'oid' }
+			);
 			foreach my $attr (@SYS_ATTRS)
 			{
 				$attnum--;
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index 5fd5313..1cbc250 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -97,11 +97,13 @@ foreach my $row (@{ $catalog_data{pg_proc} })
 	next if $bki_values{prolang} ne $INTERNALlanguageId;
 
 	push @fmgr,
-	  { oid    => $bki_values{oid},
+	  {
+		oid    => $bki_values{oid},
 		strict => $bki_values{proisstrict},
 		retset => $bki_values{proretset},
 		nargs  => $bki_values{pronargs},
-		prosrc => $bki_values{prosrc}, };
+		prosrc => $bki_values{prosrc},
+	  };
 }
 
 # Emit headers for both files
diff --git a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
index 7d497c6..672d890 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
@@ -48,12 +48,14 @@ foreach my $i (@$cp950txt)
 		&& $code <= 0xf9dc)
 	{
 		push @$all,
-		  { code      => $code,
+		  {
+			code      => $code,
 			ucs       => $ucs,
 			comment   => $i->{comment},
 			direction => BOTH,
 			f         => $i->{f},
-			l         => $i->{l} };
+			l         => $i->{l}
+		  };
 	}
 }
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 2971e64..0d3184c 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -40,8 +40,11 @@ while (<$in>)
 	next if (($code & 0xFF) < 0xA1);
 	next
 	  if (
-		!(     $code >= 0xA100 && $code <= 0xA9FF
-			|| $code >= 0xB000 && $code <= 0xF7FF));
+		!(
+			   $code >= 0xA100 && $code <= 0xA9FF
+			|| $code >= 0xB000 && $code <= 0xF7FF
+		)
+	  );
 
 	next if ($code >= 0xA2A1 && $code <= 0xA2B0);
 	next if ($code >= 0xA2E3 && $code <= 0xA2E4);
@@ -70,11 +73,13 @@ while (<$in>)
 	}
 
 	push @mapping,
-	  { ucs       => $ucs,
+	  {
+		ucs       => $ucs,
 		code      => $code,
 		direction => BOTH,
 		f         => $in_file,
-		l         => $. };
+		l         => $.
+	  };
 }
 close($in);
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
index 1c1152e..9ad7dd0 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
@@ -33,13 +33,15 @@ while (my $line = <$in>)
 		my $ucs2 = hex($u2);
 
 		push @all,
-		  { direction  => BOTH,
+		  {
+			direction  => BOTH,
 			ucs        => $ucs1,
 			ucs_second => $ucs2,
 			code       => $code,
 			comment    => $rest,
 			f          => $in_file,
-			l          => $. };
+			l          => $.
+		  };
 	}
 	elsif ($line =~ /^0x(.*)[ \t]*U\+(.*)[ \t]*#(.*)$/)
 	{
@@ -52,12 +54,14 @@ while (my $line = <$in>)
 		next if ($code < 0x80 && $ucs < 0x80);
 
 		push @all,
-		  { direction => BOTH,
+		  {
+			direction => BOTH,
 			ucs       => $ucs,
 			code      => $code,
 			comment   => $rest,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
index 0ac1f17..dbca4ce 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
@@ -69,7 +69,8 @@ foreach my $i (@$ct932)
 		$i->{code} = $jis | (
 			$jis < 0x100
 			? 0x8e00
-			: ($sjis >= 0xeffd ? 0x8f8080 : 0x8080));
+			: ($sjis >= 0xeffd ? 0x8f8080 : 0x8080)
+		);
 
 		# Remember the SJIS code for later.
 		$i->{sjis} = $sjis;
@@ -115,352 +116,525 @@ foreach my $i (@mapping)
 }
 
 push @mapping, (
-	{   direction => BOTH,
+	{
+		direction => BOTH,
 		ucs       => 0x4efc,
 		code      => 0x8ff4af,
-		comment   => '# CJK(4EFC)' },
-	{   direction => BOTH,
+		comment   => '# CJK(4EFC)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x50f4,
 		code      => 0x8ff4b0,
-		comment   => '# CJK(50F4)' },
-	{   direction => BOTH,
+		comment   => '# CJK(50F4)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x51EC,
 		code      => 0x8ff4b1,
-		comment   => '# CJK(51EC)' },
-	{   direction => BOTH,
+		comment   => '# CJK(51EC)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5307,
 		code      => 0x8ff4b2,
-		comment   => '# CJK(5307)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5307)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5324,
 		code      => 0x8ff4b3,
-		comment   => '# CJK(5324)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5324)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x548A,
 		code      => 0x8ff4b5,
-		comment   => '# CJK(548A)' },
-	{   direction => BOTH,
+		comment   => '# CJK(548A)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5759,
 		code      => 0x8ff4b6,
-		comment   => '# CJK(5759)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5759)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x589E,
 		code      => 0x8ff4b9,
-		comment   => '# CJK(589E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(589E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5BEC,
 		code      => 0x8ff4ba,
-		comment   => '# CJK(5BEC)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5BEC)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5CF5,
 		code      => 0x8ff4bb,
-		comment   => '# CJK(5CF5)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5CF5)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5D53,
 		code      => 0x8ff4bc,
-		comment   => '# CJK(5D53)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5D53)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5FB7,
 		code      => 0x8ff4be,
-		comment   => '# CJK(5FB7)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5FB7)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6085,
 		code      => 0x8ff4bf,
-		comment   => '# CJK(6085)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6085)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6120,
 		code      => 0x8ff4c0,
-		comment   => '# CJK(6120)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6120)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x654E,
 		code      => 0x8ff4c1,
-		comment   => '# CJK(654E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(654E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x663B,
 		code      => 0x8ff4c2,
-		comment   => '# CJK(663B)' },
-	{   direction => BOTH,
+		comment   => '# CJK(663B)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6665,
 		code      => 0x8ff4c3,
-		comment   => '# CJK(6665)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6665)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6801,
 		code      => 0x8ff4c6,
-		comment   => '# CJK(6801)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6801)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6A6B,
 		code      => 0x8ff4c9,
-		comment   => '# CJK(6A6B)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6A6B)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6AE2,
 		code      => 0x8ff4ca,
-		comment   => '# CJK(6AE2)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6AE2)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6DF2,
 		code      => 0x8ff4cc,
-		comment   => '# CJK(6DF2)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6DF2)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6DF8,
 		code      => 0x8ff4cb,
-		comment   => '# CJK(6DF8)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6DF8)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7028,
 		code      => 0x8ff4cd,
-		comment   => '# CJK(7028)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7028)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x70BB,
 		code      => 0x8ff4ae,
-		comment   => '# CJK(70BB)' },
-	{   direction => BOTH,
+		comment   => '# CJK(70BB)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7501,
 		code      => 0x8ff4d0,
-		comment   => '# CJK(7501)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7501)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7682,
 		code      => 0x8ff4d1,
-		comment   => '# CJK(7682)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7682)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x769E,
 		code      => 0x8ff4d2,
-		comment   => '# CJK(769E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(769E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7930,
 		code      => 0x8ff4d4,
-		comment   => '# CJK(7930)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7930)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7AE7,
 		code      => 0x8ff4d9,
-		comment   => '# CJK(7AE7)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7AE7)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7DA0,
 		code      => 0x8ff4dc,
-		comment   => '# CJK(7DA0)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7DA0)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7DD6,
 		code      => 0x8ff4dd,
-		comment   => '# CJK(7DD6)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7DD6)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8362,
 		code      => 0x8ff4df,
-		comment   => '# CJK(8362)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8362)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x85B0,
 		code      => 0x8ff4e1,
-		comment   => '# CJK(85B0)' },
-	{   direction => BOTH,
+		comment   => '# CJK(85B0)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8807,
 		code      => 0x8ff4e4,
-		comment   => '# CJK(8807)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8807)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8B7F,
 		code      => 0x8ff4e6,
-		comment   => '# CJK(8B7F)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8B7F)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8CF4,
 		code      => 0x8ff4e7,
-		comment   => '# CJK(8CF4)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8CF4)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8D76,
 		code      => 0x8ff4e8,
-		comment   => '# CJK(8D76)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8D76)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x90DE,
 		code      => 0x8ff4ec,
-		comment   => '# CJK(90DE)' },
-	{   direction => BOTH,
+		comment   => '# CJK(90DE)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9115,
 		code      => 0x8ff4ee,
-		comment   => '# CJK(9115)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9115)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9592,
 		code      => 0x8ff4f1,
-		comment   => '# CJK(9592)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9592)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x973B,
 		code      => 0x8ff4f4,
-		comment   => '# CJK(973B)' },
-	{   direction => BOTH,
+		comment   => '# CJK(973B)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x974D,
 		code      => 0x8ff4f5,
-		comment   => '# CJK(974D)' },
-	{   direction => BOTH,
+		comment   => '# CJK(974D)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9751,
 		code      => 0x8ff4f6,
-		comment   => '# CJK(9751)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9751)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x999E,
 		code      => 0x8ff4fa,
-		comment   => '# CJK(999E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(999E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9AD9,
 		code      => 0x8ff4fb,
-		comment   => '# CJK(9AD9)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9AD9)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9B72,
 		code      => 0x8ff4fc,
-		comment   => '# CJK(9B72)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9B72)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9ED1,
 		code      => 0x8ff4fe,
-		comment   => '# CJK(9ED1)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9ED1)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xF929,
 		code      => 0x8ff4c5,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F929' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F929'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xF9DC,
 		code      => 0x8ff4f2,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F9DC' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F9DC'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA0E,
 		code      => 0x8ff4b4,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0E' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0E'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA0F,
 		code      => 0x8ff4b7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0F' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0F'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA10,
 		code      => 0x8ff4b8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA10' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA10'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA11,
 		code      => 0x8ff4bd,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA11' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA11'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA12,
 		code      => 0x8ff4c4,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA12' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA12'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA13,
 		code      => 0x8ff4c7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA13' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA13'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA14,
 		code      => 0x8ff4c8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA14' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA14'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA15,
 		code      => 0x8ff4ce,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA15' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA15'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA16,
 		code      => 0x8ff4cf,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA16' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA16'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA17,
 		code      => 0x8ff4d3,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA17' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA17'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA18,
 		code      => 0x8ff4d5,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA18' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA18'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA19,
 		code      => 0x8ff4d6,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA19' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA19'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1A,
 		code      => 0x8ff4d7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1A' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1A'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1B,
 		code      => 0x8ff4d8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1B' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1B'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1C,
 		code      => 0x8ff4da,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1C' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1C'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1D,
 		code      => 0x8ff4db,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1D' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1D'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1E,
 		code      => 0x8ff4de,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1E' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1E'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1F,
 		code      => 0x8ff4e0,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1F' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1F'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA20,
 		code      => 0x8ff4e2,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA20' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA20'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA21,
 		code      => 0x8ff4e3,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA21' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA21'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA22,
 		code      => 0x8ff4e5,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA22' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA22'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA23,
 		code      => 0x8ff4e9,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA23' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA23'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA24,
 		code      => 0x8ff4ea,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA24' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA24'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA25,
 		code      => 0x8ff4eb,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA25' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA25'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA26,
 		code      => 0x8ff4ed,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA26' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA26'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA27,
 		code      => 0x8ff4ef,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA27' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA27'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA28,
 		code      => 0x8ff4f0,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA28' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA28'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA29,
 		code      => 0x8ff4f3,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA29' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA29'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2A,
 		code      => 0x8ff4f7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2A' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2A'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2B,
 		code      => 0x8ff4f8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2B' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2B'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2C,
 		code      => 0x8ff4f9,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2C' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2C'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2D,
 		code      => 0x8ff4fd,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2D' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2D'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFF07,
 		code      => 0x8ff4a9,
-		comment   => '# FULLWIDTH APOSTROPHE' },
-	{   direction => BOTH,
+		comment   => '# FULLWIDTH APOSTROPHE'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFFE4,
 		code      => 0x8fa2c3,
-		comment   => '# FULLWIDTH BROKEN BAR' },
+		comment   => '# FULLWIDTH BROKEN BAR'
+	},
 
 	# additional conversions for EUC_JP -> UTF-8 conversion
-	{   direction => TO_UNICODE,
+	{
+		direction => TO_UNICODE,
 		ucs       => 0x2116,
 		code      => 0x8ff4ac,
-		comment   => '# NUMERO SIGN' },
-	{   direction => TO_UNICODE,
+		comment   => '# NUMERO SIGN'
+	},
+	{
+		direction => TO_UNICODE,
 		ucs       => 0x2121,
 		code      => 0x8ff4ad,
-		comment   => '# TELEPHONE SIGN' },
-	{   direction => TO_UNICODE,
+		comment   => '# TELEPHONE SIGN'
+	},
+	{
+		direction => TO_UNICODE,
 		ucs       => 0x3231,
 		code      => 0x8ff4ab,
-		comment   => '# PARENTHESIZED IDEOGRAPH STOCK' });
+		comment   => '# PARENTHESIZED IDEOGRAPH STOCK'
+	}
+);
 
 print_conversion_tables($this_script, "EUC_JP", \@mapping);
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
index 4d6a3ca..e0045ab 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
@@ -32,23 +32,31 @@ foreach my $i (@$mapping)
 
 # Some extra characters that are not in KSX1001.TXT
 push @$mapping,
-  ( {   direction => BOTH,
+  (
+	{
+		direction => BOTH,
 		ucs       => 0x20AC,
 		code      => 0xa2e6,
 		comment   => '# EURO SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x00AE,
 		code      => 0xa2e7,
 		comment   => '# REGISTERED SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x327E,
 		code      => 0xa2e8,
 		comment   => '# CIRCLED HANGUL IEUNG U',
 		f         => $this_script,
-		l         => __LINE__ });
+		l         => __LINE__
+	}
+  );
 
 print_conversion_tables($this_script, "EUC_KR", $mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
index 89f3cd7..98d4156d 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
@@ -53,12 +53,14 @@ foreach my $i (@$mapping)
 	if ($origcode >= 0x12121 && $origcode <= 0x20000)
 	{
 		push @extras,
-		  { ucs       => $i->{ucs},
+		  {
+			ucs       => $i->{ucs},
 			code      => ($i->{code} + 0x8ea10000),
 			rest      => $i->{rest},
 			direction => TO_UNICODE,
 			f         => $i->{f},
-			l         => $i->{l} };
+			l         => $i->{l}
+		  };
 	}
 }
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
index ec184d7..65ffee3 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
@@ -36,11 +36,13 @@ while (<$in>)
 	if ($code >= 0x80 && $ucs >= 0x0080)
 	{
 		push @mapping,
-		  { ucs       => $ucs,
+		  {
+			ucs       => $ucs,
 			code      => $code,
 			direction => BOTH,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
index b580373..ee55961 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
@@ -26,23 +26,31 @@ my $mapping = &read_source("JOHAB.TXT");
 
 # Some extra characters that are not in JOHAB.TXT
 push @$mapping,
-  ( {   direction => BOTH,
+  (
+	{
+		direction => BOTH,
 		ucs       => 0x20AC,
 		code      => 0xd9e6,
 		comment   => '# EURO SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x00AE,
 		code      => 0xd9e7,
 		comment   => '# REGISTERED SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x327E,
 		code      => 0xd9e8,
 		comment   => '# CIRCLED HANGUL IEUNG U',
 		f         => $this_script,
-		l         => __LINE__ });
+		l         => __LINE__
+	}
+  );
 
 print_conversion_tables($this_script, "JOHAB", $mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
index d153e4c..bb84458 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
@@ -33,13 +33,15 @@ while (my $line = <$in>)
 		my $ucs2 = hex($u2);
 
 		push @mapping,
-		  { code       => $code,
+		  {
+			code       => $code,
 			ucs        => $ucs1,
 			ucs_second => $ucs2,
 			comment    => $rest,
 			direction  => BOTH,
 			f          => $in_file,
-			l          => $. };
+			l          => $.
+		  };
 	}
 	elsif ($line =~ /^0x(.*)[ \t]*U\+(.*)[ \t]*#(.*)$/)
 	{
@@ -68,12 +70,14 @@ while (my $line = <$in>)
 		}
 
 		push @mapping,
-		  { code      => $code,
+		  {
+			code      => $code,
 			ucs       => $ucs,
 			comment   => $rest,
 			direction => $direction,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
index a50f6f3..b8ef979 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
@@ -21,7 +21,8 @@ my $mapping = read_source("CP932.TXT");
 my @reject_sjis = (
 	0xed40 .. 0xeefc, 0x8754 .. 0x875d, 0x878a, 0x8782,
 	0x8784,           0xfa5b,           0xfa54, 0x8790 .. 0x8792,
-	0x8795 .. 0x8797, 0x879a .. 0x879c);
+	0x8795 .. 0x8797, 0x879a .. 0x879c
+);
 
 foreach my $i (@$mapping)
 {
@@ -36,53 +37,71 @@ foreach my $i (@$mapping)
 
 # Add these UTF8->SJIS pairs to the table.
 push @$mapping,
-  ( {   direction => FROM_UNICODE,
+  (
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00a2,
 		code      => 0x8191,
 		comment   => '# CENT SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00a3,
 		code      => 0x8192,
 		comment   => '# POUND SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00a5,
 		code      => 0x5c,
 		comment   => '# YEN SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00ac,
 		code      => 0x81ca,
 		comment   => '# NOT SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x2016,
 		code      => 0x8161,
 		comment   => '# DOUBLE VERTICAL LINE',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x203e,
 		code      => 0x7e,
 		comment   => '# OVERLINE',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x2212,
 		code      => 0x817c,
 		comment   => '# MINUS SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x301c,
 		code      => 0x8160,
 		comment   => '# WAVE DASH',
 		f         => $this_script,
-		l         => __LINE__ });
+		l         => __LINE__
+	}
+  );
 
 print_conversion_tables($this_script, "SJIS", $mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
index dc9fb75..4231aaf 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
@@ -39,22 +39,26 @@ while (<$in>)
 	if ($code >= 0x80 && $ucs >= 0x0080)
 	{
 		push @mapping,
-		  { ucs       => $ucs,
+		  {
+			ucs       => $ucs,
 			code      => $code,
 			direction => BOTH,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
 
 # One extra character that's not in the source file.
 push @mapping,
-  { direction => BOTH,
+  {
+	direction => BOTH,
 	code      => 0xa2e8,
 	ucs       => 0x327e,
 	comment   => 'CIRCLED HANGUL IEUNG U',
 	f         => $this_script,
-	l         => __LINE__ };
+	l         => __LINE__
+  };
 
 print_conversion_tables($this_script, "UHC", \@mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_most.pl b/src/backend/utils/mb/Unicode/UCS_to_most.pl
index 4453449..e302250 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_most.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_most.pl
@@ -47,7 +47,8 @@ my %filename = (
 	'ISO8859_16' => '8859-16.TXT',
 	'KOI8R'      => 'KOI8-R.TXT',
 	'KOI8U'      => 'KOI8-U.TXT',
-	'GBK'        => 'CP936.TXT');
+	'GBK'        => 'CP936.TXT'
+);
 
 # make maps for all encodings if not specified
 my @charsets = (scalar(@ARGV) > 0) ? @ARGV : sort keys(%filename);
diff --git a/src/backend/utils/mb/Unicode/convutils.pm b/src/backend/utils/mb/Unicode/convutils.pm
index 03151fa..12d279f 100644
--- a/src/backend/utils/mb/Unicode/convutils.pm
+++ b/src/backend/utils/mb/Unicode/convutils.pm
@@ -18,7 +18,8 @@ use constant {
 	NONE         => 0,
 	TO_UNICODE   => 1,
 	FROM_UNICODE => 2,
-	BOTH         => 3 };
+	BOTH         => 3
+};
 
 #######################################################################
 # read_source - common routine to read source file
@@ -56,7 +57,8 @@ sub read_source
 			comment   => $4,
 			direction => BOTH,
 			f         => $fname,
-			l         => $. };
+			l         => $.
+		};
 
 		# Ignore pure ASCII mappings. PostgreSQL character conversion code
 		# never even passes these to the conversion code.
@@ -370,9 +372,11 @@ sub print_radix_table
 	}
 
 	unshift @segments,
-	  { header  => "Dummy map, for invalid values",
+	  {
+		header  => "Dummy map, for invalid values",
 		min_idx => 0,
-		max_idx => $widest_range };
+		max_idx => $widest_range
+	  };
 
 	###
 	### Eliminate overlapping zeros
@@ -395,7 +399,8 @@ sub print_radix_table
 		for (
 			my $i = $seg->{max_idx};
 			$i >= $seg->{min_idx} && !$seg->{values}->{$i};
-			$i--)
+			$i--
+		  )
 		{
 			$this_trail_zeros++;
 		}
@@ -405,7 +410,8 @@ sub print_radix_table
 		for (
 			my $i = $nextseg->{min_idx};
 			$i <= $nextseg->{max_idx} && !$nextseg->{values}->{$i};
-			$i++)
+			$i++
+		  )
 		{
 			$next_lead_zeros++;
 		}
@@ -655,12 +661,14 @@ sub build_segments_recurse
 	if ($level == $depth)
 	{
 		push @segments,
-		  { header => $header . ", leaf: ${path}xx",
+		  {
+			header => $header . ", leaf: ${path}xx",
 			label  => $label,
 			level  => $level,
 			depth  => $depth,
 			path   => $path,
-			values => $map };
+			values => $map
+		  };
 	}
 	else
 	{
@@ -678,12 +686,14 @@ sub build_segments_recurse
 		}
 
 		push @segments,
-		  { header => $header . ", byte #$level: ${path}xx",
+		  {
+			header => $header . ", byte #$level: ${path}xx",
 			label  => $label,
 			level  => $level,
 			depth  => $depth,
 			path   => $path,
-			values => \%children };
+			values => \%children
+		  };
 	}
 	return @segments;
 }
@@ -776,7 +786,8 @@ sub make_charmap_combined
 				code        => $c->{code},
 				comment     => $c->{comment},
 				f           => $c->{f},
-				l           => $c->{l} };
+				l           => $c->{l}
+			};
 			push @combined, $entry;
 		}
 	}
diff --git a/src/bin/initdb/t/001_initdb.pl b/src/bin/initdb/t/001_initdb.pl
index 599460c..4fa4f94 100644
--- a/src/bin/initdb/t/001_initdb.pl
+++ b/src/bin/initdb/t/001_initdb.pl
@@ -25,15 +25,18 @@ mkdir $xlogdir;
 mkdir "$xlogdir/lost+found";
 command_fails(
 	[ 'initdb', '-X', $xlogdir, $datadir ],
-	'existing nonempty xlog directory');
+	'existing nonempty xlog directory'
+);
 rmdir "$xlogdir/lost+found";
 command_fails(
 	[ 'initdb', '-X', 'pgxlog', $datadir ],
-	'relative xlog directory not allowed');
+	'relative xlog directory not allowed'
+);
 
 command_fails(
 	[ 'initdb', '-U', 'pg_test', $datadir ],
-	'role names cannot begin with "pg_"');
+	'role names cannot begin with "pg_"'
+);
 
 mkdir $datadir;
 
@@ -72,7 +75,8 @@ SKIP:
 
 	command_ok(
 		[ 'initdb', '-g', $datadir_group ],
-		'successful creation with group access');
+		'successful creation with group access'
+	);
 
 	ok(check_mode_recursive($datadir_group, 0750, 0640),
 		'check PGDATA permissions');
diff --git a/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl b/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl
index fdedd2f..4c5b855 100644
--- a/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl
+++ b/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl
@@ -11,7 +11,8 @@ my $tempdir = TestLib::tempdir;
 
 my @walfiles = (
 	'00000001000000370000000C.gz', '00000001000000370000000D',
-	'00000001000000370000000E',    '00000001000000370000000F.partial',);
+	'00000001000000370000000E',    '00000001000000370000000F.partial',
+);
 
 sub create_files
 {
@@ -28,27 +29,32 @@ create_files();
 command_fails_like(
 	['pg_archivecleanup'],
 	qr/must specify archive location/,
-	'fails if archive location is not specified');
+	'fails if archive location is not specified'
+);
 
 command_fails_like(
 	[ 'pg_archivecleanup', $tempdir ],
 	qr/must specify oldest kept WAL file/,
-	'fails if oldest kept WAL file name is not specified');
+	'fails if oldest kept WAL file name is not specified'
+);
 
 command_fails_like(
 	[ 'pg_archivecleanup', 'notexist', 'foo' ],
 	qr/archive location .* does not exist/,
-	'fails if archive location does not exist');
+	'fails if archive location does not exist'
+);
 
 command_fails_like(
 	[ 'pg_archivecleanup', $tempdir, 'foo', 'bar' ],
 	qr/too many command-line arguments/,
-	'fails with too many command-line arguments');
+	'fails with too many command-line arguments'
+);
 
 command_fails_like(
 	[ 'pg_archivecleanup', $tempdir, 'foo' ],
 	qr/invalid file name argument/,
-	'fails with invalid restart file name');
+	'fails with invalid restart file name'
+);
 
 {
 	# like command_like but checking stderr
@@ -59,7 +65,8 @@ command_fails_like(
 	like(
 		$stderr,
 		qr/$walfiles[1].*would be removed/,
-		"pg_archivecleanup dry run: matches");
+		"pg_archivecleanup dry run: matches"
+	);
 	foreach my $fn (@walfiles)
 	{
 		ok(-f "$tempdir/$fn", "$fn not removed");
@@ -73,9 +80,12 @@ sub run_check
 	create_files();
 
 	command_ok(
-		[   'pg_archivecleanup', '-x', '.gz', $tempdir,
-			$walfiles[2] . $suffix ],
-		"$test_name: runs");
+		[
+			'pg_archivecleanup', '-x', '.gz', $tempdir,
+			$walfiles[2] . $suffix
+		],
+		"$test_name: runs"
+	);
 
 	ok(!-f "$tempdir/$walfiles[0]",
 		"$test_name: first older WAL file was cleaned up");
diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
index d7ab36b..05b0bbc 100644
--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl
+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
@@ -40,7 +40,8 @@ system_or_bail 'pg_ctl', '-D', $pgdata, 'reload';
 
 $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backup" ],
-	'pg_basebackup fails because of WAL configuration');
+	'pg_basebackup fails because of WAL configuration'
+);
 
 ok(!-d "$tempdir/backup", 'backup directory was cleaned up');
 
@@ -119,7 +120,8 @@ SKIP:
 is_deeply(
 	[ sort(slurp_dir("$tempdir/backup/pg_wal/")) ],
 	[ sort qw(. .. archive_status) ],
-	'no WAL files copied');
+	'no WAL files copied'
+);
 
 # Contents of these directories should not be copied.
 foreach my $dirname (
@@ -129,13 +131,15 @@ foreach my $dirname (
 	is_deeply(
 		[ sort(slurp_dir("$tempdir/backup/$dirname/")) ],
 		[ sort qw(. ..) ],
-		"contents of $dirname/ not copied");
+		"contents of $dirname/ not copied"
+	);
 }
 
 # These files should not be copied.
 foreach my $filename (
 	qw(postgresql.auto.conf.tmp postmaster.opts postmaster.pid tablespace_map current_logfiles.tmp
-	global/pg_internal.init))
+	global/pg_internal.init)
+  )
 {
 	ok(!-f "$tempdir/backup/$filename", "$filename not copied");
 }
@@ -143,14 +147,18 @@ foreach my $filename (
 # Unlogged relation forks other than init should not be copied
 ok(-f "$tempdir/backup/${baseUnloggedPath}_init",
 	'unlogged init fork in backup');
-ok( !-f "$tempdir/backup/$baseUnloggedPath",
-	'unlogged main fork not in backup');
+ok(
+	!-f "$tempdir/backup/$baseUnloggedPath",
+	'unlogged main fork not in backup'
+);
 
 # Temp relations should not be copied.
 foreach my $filename (@tempRelationFiles)
 {
-	ok( !-f "$tempdir/backup/base/$postgresOid/$filename",
-		"base/$postgresOid/$filename not copied");
+	ok(
+		!-f "$tempdir/backup/base/$postgresOid/$filename",
+		"base/$postgresOid/$filename not copied"
+	);
 }
 
 # Make sure existing backup_label was ignored.
@@ -159,9 +167,12 @@ isnt(slurp_file("$tempdir/backup/backup_label"),
 rmtree("$tempdir/backup");
 
 $node->command_ok(
-	[   'pg_basebackup', '-D', "$tempdir/backup2", '--waldir',
-		"$tempdir/xlog2" ],
-	'separate xlog directory');
+	[
+		'pg_basebackup', '-D', "$tempdir/backup2", '--waldir',
+		"$tempdir/xlog2"
+	],
+	'separate xlog directory'
+);
 ok(-f "$tempdir/backup2/PG_VERSION", 'backup was created');
 ok(-d "$tempdir/xlog2/",             'xlog directory was created');
 rmtree("$tempdir/backup2");
@@ -179,9 +190,12 @@ $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp', "-T/foo=" ],
 	'-T with empty new directory fails');
 $node->command_fails(
-	[   'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp',
-		"-T/foo=/bar=/baz" ],
-	'-T with multiple = fails');
+	[
+		'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp',
+		"-T/foo=/bar=/baz"
+	],
+	'-T with multiple = fails'
+);
 $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp', "-Tfoo=/bar" ],
 	'-T with old directory not absolute fails');
@@ -254,8 +268,10 @@ SKIP:
 		q{select pg_relation_filepath('tblspc1_unlogged')});
 
 	# Make sure main and init forks exist
-	ok( -f "$pgdata/${tblspc1UnloggedPath}_init",
-		'unlogged init fork in tablespace');
+	ok(
+		-f "$pgdata/${tblspc1UnloggedPath}_init",
+		'unlogged init fork in tablespace'
+	);
 	ok(-f "$pgdata/$tblspc1UnloggedPath", 'unlogged main fork in tablespace');
 
 	# Create files that look like temporary relations to ensure they are ignored
@@ -265,7 +281,11 @@ SKIP:
 		dirname(
 			dirname(
 				$node->safe_psql(
-					'postgres', q{select pg_relation_filepath('test1')}))));
+					'postgres', q{select pg_relation_filepath('test1')}
+				)
+			)
+		)
+	);
 
 	foreach my $filename (@tempRelationFiles)
 	{
@@ -276,20 +296,28 @@ SKIP:
 
 	$node->command_fails(
 		[ 'pg_basebackup', '-D', "$tempdir/backup1", '-Fp' ],
-		'plain format with tablespaces fails without tablespace mapping');
+		'plain format with tablespaces fails without tablespace mapping'
+	);
 
 	$node->command_ok(
-		[   'pg_basebackup', '-D', "$tempdir/backup1", '-Fp',
-			"-T$shorter_tempdir/tblspc1=$tempdir/tbackup/tblspc1" ],
-		'plain format with tablespaces succeeds with tablespace mapping');
+		[
+			'pg_basebackup', '-D', "$tempdir/backup1", '-Fp',
+			"-T$shorter_tempdir/tblspc1=$tempdir/tbackup/tblspc1"
+		],
+		'plain format with tablespaces succeeds with tablespace mapping'
+	);
 	ok(-d "$tempdir/tbackup/tblspc1", 'tablespace was relocated');
 	opendir(my $dh, "$pgdata/pg_tblspc") or die;
-	ok( (   grep {
+	ok(
+		(
+			grep {
 				-l "$tempdir/backup1/pg_tblspc/$_"
 				  and readlink "$tempdir/backup1/pg_tblspc/$_" eq
 				  "$tempdir/tbackup/tblspc1"
-			} readdir($dh)),
-		"tablespace symlink was updated");
+			} readdir($dh)
+		),
+		"tablespace symlink was updated"
+	);
 	closedir $dh;
 
 	# Group access should be enabled on all backup files
@@ -308,8 +336,10 @@ SKIP:
 	# Temp relations should not be copied.
 	foreach my $filename (@tempRelationFiles)
 	{
-		ok( !-f "$tempdir/tbackup/tblspc1/$tblSpc1Id/$postgresOid/$filename",
-			"[tblspc1]/$postgresOid/$filename not copied");
+		ok(
+			!-f "$tempdir/tbackup/tblspc1/$tblSpc1Id/$postgresOid/$filename",
+			"[tblspc1]/$postgresOid/$filename not copied"
+		);
 
 		# Also remove temp relation files or tablespace drop will fail.
 		my $filepath =
@@ -319,8 +349,10 @@ SKIP:
 		  or BAIL_OUT("unable to unlink $filepath");
 	}
 
-	ok( -d "$tempdir/backup1/pg_replslot",
-		'pg_replslot symlink copied as directory');
+	ok(
+		-d "$tempdir/backup1/pg_replslot",
+		'pg_replslot symlink copied as directory'
+	);
 	rmtree("$tempdir/backup1");
 
 	mkdir "$tempdir/tbl=spc2";
@@ -330,9 +362,12 @@ SKIP:
 	$node->safe_psql('postgres',
 		"CREATE TABLESPACE tblspc2 LOCATION '$shorter_tempdir/tbl=spc2';");
 	$node->command_ok(
-		[   'pg_basebackup', '-D', "$tempdir/backup3", '-Fp',
-			"-T$shorter_tempdir/tbl\\=spc2=$tempdir/tbackup/tbl\\=spc2" ],
-		'mapping tablespace with = sign in path');
+		[
+			'pg_basebackup', '-D', "$tempdir/backup3", '-Fp',
+			"-T$shorter_tempdir/tbl\\=spc2=$tempdir/tbackup/tbl\\=spc2"
+		],
+		'mapping tablespace with = sign in path'
+	);
 	ok(-d "$tempdir/tbackup/tbl=spc2",
 		'tablespace with = sign was relocated');
 	$node->safe_psql('postgres', "DROP TABLESPACE tblspc2;");
@@ -358,15 +393,18 @@ my $port = $node->port;
 like(
 	$recovery_conf,
 	qr/^standby_mode = 'on'\n/m,
-	'recovery.conf sets standby_mode');
+	'recovery.conf sets standby_mode'
+);
 like(
 	$recovery_conf,
 	qr/^primary_conninfo = '.*port=$port.*'\n/m,
-	'recovery.conf sets primary_conninfo');
+	'recovery.conf sets primary_conninfo'
+);
 
 $node->command_ok(
 	[ 'pg_basebackup', '-D', "$tempdir/backupxd" ],
-	'pg_basebackup runs in default xlog mode');
+	'pg_basebackup runs in default xlog mode'
+);
 ok(grep(/^[0-9A-F]{24}$/, slurp_dir("$tempdir/backupxd/pg_wal")),
 	'WAL files copied');
 rmtree("$tempdir/backupxd");
@@ -389,52 +427,66 @@ $node->command_ok(
 ok(-f "$tempdir/backupxst/pg_wal.tar", "tar file was created");
 rmtree("$tempdir/backupxst");
 $node->command_ok(
-	[   'pg_basebackup',         '-D',
+	[
+		'pg_basebackup',         '-D',
 		"$tempdir/backupnoslot", '-X',
-		'stream',                '--no-slot' ],
-	'pg_basebackup -X stream runs with --no-slot');
+		'stream',                '--no-slot'
+	],
+	'pg_basebackup -X stream runs with --no-slot'
+);
 rmtree("$tempdir/backupnoslot");
 
 $node->command_fails(
-	[   'pg_basebackup',             '-D',
+	[
+		'pg_basebackup',             '-D',
 		"$tempdir/backupxs_sl_fail", '-X',
 		'stream',                    '-S',
-		'slot0' ],
-	'pg_basebackup fails with nonexistent replication slot');
+		'slot0'
+	],
+	'pg_basebackup fails with nonexistent replication slot'
+);
 
 $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backupxs_slot", '-C' ],
 	'pg_basebackup -C fails without slot name');
 
 $node->command_fails(
-	[   'pg_basebackup',          '-D',
+	[
+		'pg_basebackup',          '-D',
 		"$tempdir/backupxs_slot", '-C',
 		'-S',                     'slot0',
-		'--no-slot' ],
-	'pg_basebackup fails with -C -S --no-slot');
+		'--no-slot'
+	],
+	'pg_basebackup fails with -C -S --no-slot'
+);
 
 $node->command_ok(
 	[ 'pg_basebackup', '-D', "$tempdir/backupxs_slot", '-C', '-S', 'slot0' ],
-	'pg_basebackup -C runs');
+	'pg_basebackup -C runs'
+);
 rmtree("$tempdir/backupxs_slot");
 
-is( $node->safe_psql(
+is(
+	$node->safe_psql(
 		'postgres',
 		q{SELECT slot_name FROM pg_replication_slots WHERE slot_name = 'slot0'}
 	),
 	'slot0',
-	'replication slot was created');
+	'replication slot was created'
+);
 isnt(
 	$node->safe_psql(
 		'postgres',
 		q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot0'}
 	),
 	'',
-	'restart LSN of new slot is not null');
+	'restart LSN of new slot is not null'
+);
 
 $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backupxs_slot1", '-C', '-S', 'slot0' ],
-	'pg_basebackup fails with -C -S and a previously existing slot');
+	'pg_basebackup fails with -C -S and a previously existing slot'
+);
 
 $node->safe_psql('postgres',
 	q{SELECT * FROM pg_create_physical_replication_slot('slot1')});
@@ -444,11 +496,15 @@ my $lsn = $node->safe_psql('postgres',
 is($lsn, '', 'restart LSN of new slot is null');
 $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/fail", '-S', 'slot1', '-X', 'none' ],
-	'pg_basebackup with replication slot fails without WAL streaming');
+	'pg_basebackup with replication slot fails without WAL streaming'
+);
 $node->command_ok(
-	[   'pg_basebackup', '-D', "$tempdir/backupxs_sl", '-X',
-		'stream',        '-S', 'slot1' ],
-	'pg_basebackup -X stream with replication slot runs');
+	[
+		'pg_basebackup', '-D', "$tempdir/backupxs_sl", '-X',
+		'stream',        '-S', 'slot1'
+	],
+	'pg_basebackup -X stream with replication slot runs'
+);
 $lsn = $node->safe_psql('postgres',
 	q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot1'}
 );
@@ -456,13 +512,17 @@ like($lsn, qr!^0/[0-9A-Z]{7,8}$!, 'restart LSN of slot has advanced');
 rmtree("$tempdir/backupxs_sl");
 
 $node->command_ok(
-	[   'pg_basebackup', '-D', "$tempdir/backupxs_sl_R", '-X',
-		'stream',        '-S', 'slot1',                  '-R' ],
-	'pg_basebackup with replication slot and -R runs');
+	[
+		'pg_basebackup', '-D', "$tempdir/backupxs_sl_R", '-X',
+		'stream',        '-S', 'slot1',                  '-R'
+	],
+	'pg_basebackup with replication slot and -R runs'
+);
 like(
 	slurp_file("$tempdir/backupxs_sl_R/recovery.conf"),
 	qr/^primary_slot_name = 'slot1'\n/m,
-	'recovery.conf sets primary_slot_name');
+	'recovery.conf sets primary_slot_name'
+);
 
 my $checksum = $node->safe_psql('postgres', 'SHOW data_checksums;');
 is($checksum, 'on', 'checksums are enabled');
@@ -493,7 +553,8 @@ $node->command_checks_all(
 	1,
 	[qr{^$}],
 	[qr/^WARNING.*checksum verification failed/s],
-	'pg_basebackup reports checksum mismatch');
+	'pg_basebackup reports checksum mismatch'
+);
 rmtree("$tempdir/backup_corrupt");
 
 # induce further corruption in 5 more blocks
@@ -513,7 +574,8 @@ $node->command_checks_all(
 	1,
 	[qr{^$}],
 	[qr/^WARNING.*further.*failures.*will.not.be.reported/s],
-	'pg_basebackup does not report more than 5 checksum mismatches');
+	'pg_basebackup does not report more than 5 checksum mismatches'
+);
 rmtree("$tempdir/backup_corrupt2");
 
 # induce corruption in a second file
@@ -529,13 +591,15 @@ $node->command_checks_all(
 	1,
 	[qr{^$}],
 	[qr/^WARNING.*7 total checksum verification failures/s],
-	'pg_basebackup correctly report the total number of checksum mismatches');
+	'pg_basebackup correctly report the total number of checksum mismatches'
+);
 rmtree("$tempdir/backup_corrupt3");
 
 # do not verify checksums, should return ok
 $node->command_ok(
 	[ 'pg_basebackup', '-D', "$tempdir/backup_corrupt4", '-k' ],
-	'pg_basebackup with -k does not report checksum mismatch');
+	'pg_basebackup with -k does not report checksum mismatch'
+);
 rmtree("$tempdir/backup_corrupt4");
 
 $node->safe_psql('postgres', "DROP TABLE corrupt1;");
diff --git a/src/bin/pg_basebackup/t/020_pg_receivewal.pl b/src/bin/pg_basebackup/t/020_pg_receivewal.pl
index 0793f9c..2ef237f 100644
--- a/src/bin/pg_basebackup/t/020_pg_receivewal.pl
+++ b/src/bin/pg_basebackup/t/020_pg_receivewal.pl
@@ -23,10 +23,12 @@ $primary->command_fails(['pg_receivewal'],
 	'pg_receivewal needs target directory specified');
 $primary->command_fails(
 	[ 'pg_receivewal', '-D', $stream_dir, '--create-slot', '--drop-slot' ],
-	'failure if both --create-slot and --drop-slot specified');
+	'failure if both --create-slot and --drop-slot specified'
+);
 $primary->command_fails(
 	[ 'pg_receivewal', '-D', $stream_dir, '--create-slot' ],
-	'failure if --create-slot specified without --slot');
+	'failure if --create-slot specified without --slot'
+);
 $primary->command_fails(
 	[ 'pg_receivewal', '-D', $stream_dir, '--synchronous', '--no-sync' ],
 	'failure if --synchronous specified with --no-sync');
@@ -57,9 +59,12 @@ $primary->psql('postgres',
 
 # Stream up to the given position.
 $primary->command_ok(
-	[   'pg_receivewal', '-D',     $stream_dir,     '--verbose',
-		'--endpos',      $nextlsn, '--synchronous', '--no-loop' ],
-	'streaming some WAL with --synchronous');
+	[
+		'pg_receivewal', '-D',     $stream_dir,     '--verbose',
+		'--endpos',      $nextlsn, '--synchronous', '--no-loop'
+	],
+	'streaming some WAL with --synchronous'
+);
 
 # Permissions on WAL files should be default
 SKIP:
diff --git a/src/bin/pg_basebackup/t/030_pg_recvlogical.pl b/src/bin/pg_basebackup/t/030_pg_recvlogical.pl
index e9d0941..c2e4e70 100644
--- a/src/bin/pg_basebackup/t/030_pg_recvlogical.pl
+++ b/src/bin/pg_basebackup/t/030_pg_recvlogical.pl
@@ -19,7 +19,8 @@ max_replication_slots = 4
 max_wal_senders = 4
 log_min_messages = 'debug1'
 log_error_verbosity = verbose
-});
+}
+);
 $node->dump_info;
 $node->start;
 
@@ -29,16 +30,22 @@ $node->command_fails([ 'pg_recvlogical', '-S', 'test' ],
 $node->command_fails([ 'pg_recvlogical', '-S', 'test', '-d', 'postgres' ],
 	'pg_recvlogical needs an action');
 $node->command_fails(
-	[   'pg_recvlogical',           '-S',
+	[
+		'pg_recvlogical',           '-S',
 		'test',                     '-d',
-		$node->connstr('postgres'), '--start' ],
-	'no destination file');
+		$node->connstr('postgres'), '--start'
+	],
+	'no destination file'
+);
 
 $node->command_ok(
-	[   'pg_recvlogical',           '-S',
+	[
+		'pg_recvlogical',           '-S',
 		'test',                     '-d',
-		$node->connstr('postgres'), '--create-slot' ],
-	'slot created');
+		$node->connstr('postgres'), '--create-slot'
+	],
+	'slot created'
+);
 
 my $slot = $node->slot('test');
 isnt($slot->{'restart_lsn'}, '', 'restart lsn is defined for new slot');
@@ -51,6 +58,9 @@ my $nextlsn =
 chomp($nextlsn);
 
 $node->command_ok(
-	[   'pg_recvlogical', '-S', 'test', '-d', $node->connstr('postgres'),
-		'--start', '--endpos', "$nextlsn", '--no-loop', '-f', '-' ],
-	'replayed a transaction');
+	[
+		'pg_recvlogical', '-S', 'test', '-d', $node->connstr('postgres'),
+		'--start', '--endpos', "$nextlsn", '--no-loop', '-f', '-'
+	],
+	'replayed a transaction'
+);
diff --git a/src/bin/pg_controldata/t/001_pg_controldata.pl b/src/bin/pg_controldata/t/001_pg_controldata.pl
index a9862ae..e871d73 100644
--- a/src/bin/pg_controldata/t/001_pg_controldata.pl
+++ b/src/bin/pg_controldata/t/001_pg_controldata.pl
@@ -33,7 +33,10 @@ close $fh;
 command_checks_all(
 	[ 'pg_controldata', $node->data_dir ],
 	0,
-	[   qr/WARNING: Calculated CRC checksum does not match value stored in file/,
-		qr/WARNING: invalid WAL segment size/ ],
+	[
+		qr/WARNING: Calculated CRC checksum does not match value stored in file/,
+		qr/WARNING: invalid WAL segment size/
+	],
 	[qr/^$/],
-	'pg_controldata with corrupted pg_control');
+	'pg_controldata with corrupted pg_control'
+);
diff --git a/src/bin/pg_ctl/t/001_start_stop.pl b/src/bin/pg_ctl/t/001_start_stop.pl
index 5bbb799..50a57d0 100644
--- a/src/bin/pg_ctl/t/001_start_stop.pl
+++ b/src/bin/pg_ctl/t/001_start_stop.pl
@@ -36,7 +36,8 @@ else
 close $conf;
 my $ctlcmd = [
 	'pg_ctl', 'start', '-D', "$tempdir/data", '-l',
-	"$TestLib::log_path/001_start_stop_server.log" ];
+	"$TestLib::log_path/001_start_stop_server.log"
+];
 if ($Config{osname} ne 'msys')
 {
 	command_like($ctlcmd, qr/done.*server started/s, 'pg_ctl start');
diff --git a/src/bin/pg_ctl/t/003_promote.pl b/src/bin/pg_ctl/t/003_promote.pl
index ecb294b..76bd83d 100644
--- a/src/bin/pg_ctl/t/003_promote.pl
+++ b/src/bin/pg_ctl/t/003_promote.pl
@@ -10,7 +10,8 @@ my $tempdir = TestLib::tempdir;
 command_fails_like(
 	[ 'pg_ctl', '-D', "$tempdir/nonexistent", 'promote' ],
 	qr/directory .* does not exist/,
-	'pg_ctl promote with nonexistent directory');
+	'pg_ctl promote with nonexistent directory'
+);
 
 my $node_primary = get_new_node('primary');
 $node_primary->init(allows_streaming => 1);
@@ -18,14 +19,16 @@ $node_primary->init(allows_streaming => 1);
 command_fails_like(
 	[ 'pg_ctl', '-D', $node_primary->data_dir, 'promote' ],
 	qr/PID file .* does not exist/,
-	'pg_ctl promote of not running instance fails');
+	'pg_ctl promote of not running instance fails'
+);
 
 $node_primary->start;
 
 command_fails_like(
 	[ 'pg_ctl', '-D', $node_primary->data_dir, 'promote' ],
 	qr/not in standby mode/,
-	'pg_ctl promote of primary instance fails');
+	'pg_ctl promote of primary instance fails'
+);
 
 my $node_standby = get_new_node('standby');
 $node_primary->backup('my_backup');
@@ -39,9 +42,12 @@ is($node_standby->safe_psql('postgres', 'SELECT pg_is_in_recovery()'),
 command_ok([ 'pg_ctl', '-D', $node_standby->data_dir, '-W', 'promote' ],
 	'pg_ctl -W promote of standby runs');
 
-ok( $node_standby->poll_query_until(
-		'postgres', 'SELECT NOT pg_is_in_recovery()'),
-	'promoted standby is not in recovery');
+ok(
+	$node_standby->poll_query_until(
+		'postgres', 'SELECT NOT pg_is_in_recovery()'
+	),
+	'promoted standby is not in recovery'
+);
 
 # same again with default wait option
 $node_standby = get_new_node('standby2');
diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 8be5770..6c200aa 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -31,17 +31,20 @@ program_options_handling_ok('pg_dumpall');
 command_fails_like(
 	[ 'pg_dump', 'qqq', 'abc' ],
 	qr/\Qpg_dump: too many command-line arguments (first is "abc")\E/,
-	'pg_dump: too many command-line arguments (first is "asd")');
+	'pg_dump: too many command-line arguments (first is "asd")'
+);
 
 command_fails_like(
 	[ 'pg_restore', 'qqq', 'abc' ],
 	qr/\Qpg_restore: too many command-line arguments (first is "abc")\E/,
-	'pg_restore too many command-line arguments (first is "abc")');
+	'pg_restore too many command-line arguments (first is "abc")'
+);
 
 command_fails_like(
 	[ 'pg_dumpall', 'qqq', 'abc' ],
 	qr/\Qpg_dumpall: too many command-line arguments (first is "qqq")\E/,
-	'pg_dumpall: too many command-line arguments (first is "qqq")');
+	'pg_dumpall: too many command-line arguments (first is "qqq")'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-s', '-a' ],
@@ -58,12 +61,14 @@ command_fails_like(
 command_fails_like(
 	[ 'pg_restore', '-d', 'xxx', '-f', 'xxx' ],
 	qr/\Qpg_restore: options -d\/--dbname and -f\/--file cannot be used together\E/,
-	'pg_restore: options -d/--dbname and -f/--file cannot be used together');
+	'pg_restore: options -d/--dbname and -f/--file cannot be used together'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-c', '-a' ],
 	qr/\Qpg_dump: options -c\/--clean and -a\/--data-only cannot be used together\E/,
-	'pg_dump: options -c/--clean and -a/--data-only cannot be used together');
+	'pg_dump: options -c/--clean and -a/--data-only cannot be used together'
+);
 
 command_fails_like(
 	[ 'pg_restore', '-c', '-a' ],
@@ -80,47 +85,56 @@ command_fails_like(
 command_fails_like(
 	[ 'pg_dump', '--if-exists' ],
 	qr/\Qpg_dump: option --if-exists requires option -c\/--clean\E/,
-	'pg_dump: option --if-exists requires option -c/--clean');
+	'pg_dump: option --if-exists requires option -c/--clean'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-j3' ],
 	qr/\Qpg_dump: parallel backup only supported by the directory format\E/,
-	'pg_dump: parallel backup only supported by the directory format');
+	'pg_dump: parallel backup only supported by the directory format'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-j', '-1' ],
 	qr/\Qpg_dump: invalid number of parallel jobs\E/,
-	'pg_dump: invalid number of parallel jobs');
+	'pg_dump: invalid number of parallel jobs'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-F', 'garbage' ],
 	qr/\Qpg_dump: invalid output format\E/,
-	'pg_dump: invalid output format');
+	'pg_dump: invalid output format'
+);
 
 command_fails_like(
 	[ 'pg_restore', '-j', '-1' ],
 	qr/\Qpg_restore: invalid number of parallel jobs\E/,
-	'pg_restore: invalid number of parallel jobs');
+	'pg_restore: invalid number of parallel jobs'
+);
 
 command_fails_like(
 	[ 'pg_restore', '--single-transaction', '-j3' ],
 	qr/\Qpg_restore: cannot specify both --single-transaction and multiple jobs\E/,
-	'pg_restore: cannot specify both --single-transaction and multiple jobs');
+	'pg_restore: cannot specify both --single-transaction and multiple jobs'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-Z', '-1' ],
 	qr/\Qpg_dump: compression level must be in range 0..9\E/,
-	'pg_dump: compression level must be in range 0..9');
+	'pg_dump: compression level must be in range 0..9'
+);
 
 command_fails_like(
 	[ 'pg_restore', '--if-exists' ],
 	qr/\Qpg_restore: option --if-exists requires option -c\/--clean\E/,
-	'pg_restore: option --if-exists requires option -c/--clean');
+	'pg_restore: option --if-exists requires option -c/--clean'
+);
 
 command_fails_like(
 	[ 'pg_restore', '-F', 'garbage' ],
 	qr/\Qpg_restore: unrecognized archive format "garbage";\E/,
-	'pg_dump: unrecognized archive format');
+	'pg_dump: unrecognized archive format'
+);
 
 # pg_dumpall command-line argument checks
 command_fails_like(
@@ -144,4 +158,5 @@ command_fails_like(
 command_fails_like(
 	[ 'pg_dumpall', '--if-exists' ],
 	qr/\Qpg_dumpall: option --if-exists requires option -c\/--clean\E/,
-	'pg_dumpall: option --if-exists requires option -c/--clean');
+	'pg_dumpall: option --if-exists requires option -c/--clean'
+);
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 81cd65e..487c863 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -50,7 +50,9 @@ my %pgdump_runs = (
 		restore_cmd => [
 			'pg_restore', '-Fc', '--verbose',
 			"--file=$tempdir/binary_upgrade.sql",
-			"$tempdir/binary_upgrade.dump", ], },
+			"$tempdir/binary_upgrade.dump",
+		],
+	},
 	clean => {
 		dump_cmd => [
 			'pg_dump',
@@ -58,7 +60,8 @@ my %pgdump_runs = (
 			"--file=$tempdir/clean.sql",
 			'-c',
 			'-d', 'postgres',    # alternative way to specify database
-		], },
+		],
+	},
 	clean_if_exists => {
 		dump_cmd => [
 			'pg_dump',
@@ -67,12 +70,16 @@ my %pgdump_runs = (
 			'-c',
 			'--if-exists',
 			'--encoding=UTF8',    # no-op, just tests that option is accepted
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	column_inserts => {
 		dump_cmd => [
 			'pg_dump',                            '--no-sync',
 			"--file=$tempdir/column_inserts.sql", '-a',
-			'--column-inserts',                   'postgres', ], },
+			'--column-inserts',                   'postgres',
+		],
+	},
 	createdb => {
 		dump_cmd => [
 			'pg_dump',
@@ -81,7 +88,9 @@ my %pgdump_runs = (
 			'-C',
 			'-R',    # no-op, just for testing
 			'-v',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	data_only => {
 		dump_cmd => [
 			'pg_dump',
@@ -91,78 +100,102 @@ my %pgdump_runs = (
 			'--superuser=test_superuser',
 			'--disable-triggers',
 			'-v',    # no-op, just make sure it works
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	defaults => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			'-f',      "$tempdir/defaults.sql",
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	defaults_no_public => {
 		database => 'regress_pg_dump_test',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-f', "$tempdir/defaults_no_public.sql",
-			'regress_pg_dump_test', ], },
+			'regress_pg_dump_test',
+		],
+	},
 	defaults_no_public_clean => {
 		database => 'regress_pg_dump_test',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-c', '-f',
 			"$tempdir/defaults_no_public_clean.sql",
-			'regress_pg_dump_test', ], },
+			'regress_pg_dump_test',
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_custom_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '-Fc', '-Z6',
-			"--file=$tempdir/defaults_custom_format.dump", 'postgres', ],
+			"--file=$tempdir/defaults_custom_format.dump", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore', '-Fc',
 			"--file=$tempdir/defaults_custom_format.sql",
-			"$tempdir/defaults_custom_format.dump", ], },
+			"$tempdir/defaults_custom_format.dump",
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_dir_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump',                             '-Fd',
-			"--file=$tempdir/defaults_dir_format", 'postgres', ],
+			"--file=$tempdir/defaults_dir_format", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore', '-Fd',
 			"--file=$tempdir/defaults_dir_format.sql",
-			"$tempdir/defaults_dir_format", ], },
+			"$tempdir/defaults_dir_format",
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_parallel => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '-Fd', '-j2', "--file=$tempdir/defaults_parallel",
-			'postgres', ],
+			'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_parallel.sql",
-			"$tempdir/defaults_parallel", ], },
+			"$tempdir/defaults_parallel",
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_tar_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump',                                 '-Ft',
-			"--file=$tempdir/defaults_tar_format.tar", 'postgres', ],
+			"--file=$tempdir/defaults_tar_format.tar", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			'--format=tar',
 			"--file=$tempdir/defaults_tar_format.sql",
-			"$tempdir/defaults_tar_format.tar", ], },
+			"$tempdir/defaults_tar_format.tar",
+		],
+	},
 	exclude_dump_test_schema => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			"--file=$tempdir/exclude_dump_test_schema.sql",
-			'--exclude-schema=dump_test', 'postgres', ], },
+			'--exclude-schema=dump_test', 'postgres',
+		],
+	},
 	exclude_test_table => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			"--file=$tempdir/exclude_test_table.sql",
-			'--exclude-table=dump_test.test_table', 'postgres', ], },
+			'--exclude-table=dump_test.test_table', 'postgres',
+		],
+	},
 	exclude_test_table_data => {
 		dump_cmd => [
 			'pg_dump',
@@ -170,39 +203,55 @@ my %pgdump_runs = (
 			"--file=$tempdir/exclude_test_table_data.sql",
 			'--exclude-table-data=dump_test.test_table',
 			'--no-unlogged-table-data',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	pg_dumpall_globals => {
 		dump_cmd => [
 			'pg_dumpall', '-v', "--file=$tempdir/pg_dumpall_globals.sql",
-			'-g', '--no-sync', ], },
+			'-g', '--no-sync',
+		],
+	},
 	pg_dumpall_globals_clean => {
 		dump_cmd => [
 			'pg_dumpall', "--file=$tempdir/pg_dumpall_globals_clean.sql",
-			'-g', '-c', '--no-sync', ], },
+			'-g', '-c', '--no-sync',
+		],
+	},
 	pg_dumpall_dbprivs => {
 		dump_cmd => [
 			'pg_dumpall', '--no-sync',
-			"--file=$tempdir/pg_dumpall_dbprivs.sql", ], },
+			"--file=$tempdir/pg_dumpall_dbprivs.sql",
+		],
+	},
 	no_blobs => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_blobs.sql", '-B',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	no_privs => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_privs.sql", '-x',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	no_owner => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_owner.sql", '-O',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	only_dump_test_schema => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			"--file=$tempdir/only_dump_test_schema.sql",
-			'--schema=dump_test', 'postgres', ], },
+			'--schema=dump_test', 'postgres',
+		],
+	},
 	only_dump_test_table => {
 		dump_cmd => [
 			'pg_dump',
@@ -210,7 +259,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/only_dump_test_table.sql",
 			'--table=dump_test.test_table',
 			'--lock-wait-timeout=1000000',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	role => {
 		dump_cmd => [
 			'pg_dump',
@@ -218,7 +269,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/role.sql",
 			'--role=regress_dump_test_role',
 			'--schema=dump_test_second_schema',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	role_parallel => {
 		test_key => 'role',
 		dump_cmd => [
@@ -229,39 +282,55 @@ my %pgdump_runs = (
 			"--file=$tempdir/role_parallel",
 			'--role=regress_dump_test_role',
 			'--schema=dump_test_second_schema',
-			'postgres', ],
+			'postgres',
+		],
 		restore_cmd => [
 			'pg_restore', "--file=$tempdir/role_parallel.sql",
-			"$tempdir/role_parallel", ], },
+			"$tempdir/role_parallel",
+		],
+	},
 	schema_only => {
 		dump_cmd => [
 			'pg_dump',                         '--format=plain',
 			"--file=$tempdir/schema_only.sql", '--no-sync',
-			'-s',                              'postgres', ], },
+			'-s',                              'postgres',
+		],
+	},
 	section_pre_data => {
 		dump_cmd => [
 			'pg_dump',            "--file=$tempdir/section_pre_data.sql",
 			'--section=pre-data', '--no-sync',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_data => {
 		dump_cmd => [
 			'pg_dump',        "--file=$tempdir/section_data.sql",
 			'--section=data', '--no-sync',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_post_data => {
 		dump_cmd => [
 			'pg_dump', "--file=$tempdir/section_post_data.sql",
-			'--section=post-data', '--no-sync', 'postgres', ], },
+			'--section=post-data', '--no-sync', 'postgres',
+		],
+	},
 	test_schema_plus_blobs => {
 		dump_cmd => [
 			'pg_dump', "--file=$tempdir/test_schema_plus_blobs.sql",
 
-			'--schema=dump_test', '-b', '-B', '--no-sync', 'postgres', ], },
+			'--schema=dump_test', '-b', '-B', '--no-sync', 'postgres',
+		],
+	},
 	with_oids => {
 		dump_cmd => [
 			'pg_dump',   '--oids',
 			'--no-sync', "--file=$tempdir/with_oids.sql",
-			'postgres', ], },);
+			'postgres',
+		],
+	},
+);
 
 ###############################################################
 # Definition of the tests to run.
@@ -302,7 +371,8 @@ my %pgdump_runs = (
 # Tests which target the 'dump_test' schema, specifically.
 my %dump_test_schema_runs = (
 	only_dump_test_schema  => 1,
-	test_schema_plus_blobs => 1,);
+	test_schema_plus_blobs => 1,
+);
 
 # Tests which are considered 'full' dumps by pg_dump, but there
 # are flags used to exclude specific items (ACLs, blobs, etc).
@@ -320,7 +390,8 @@ my %full_runs = (
 	no_privs                 => 1,
 	pg_dumpall_dbprivs       => 1,
 	schema_only              => 1,
-	with_oids                => 1,);
+	with_oids                => 1,
+);
 
 # This is where the actual tests are defined.
 my %tests = (
@@ -338,7 +409,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role REVOKE' => {
 		create_order => 55,
@@ -351,7 +424,8 @@ my %tests = (
 			\QREVOKE ALL ON FUNCTIONS  FROM PUBLIC;\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role REVOKE SELECT'
 	  => {
@@ -368,7 +442,8 @@ my %tests = (
 			\QGRANT INSERT,REFERENCES,DELETE,TRIGGER,TRUNCATE,UPDATE ON TABLES  TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	  },
 
 	'ALTER ROLE regress_dump_test_role' => {
 		regexp => qr/^
@@ -379,23 +454,28 @@ my %tests = (
 		like => {
 			pg_dumpall_dbprivs       => 1,
 			pg_dumpall_globals       => 1,
-			pg_dumpall_globals_clean => 1, }, },
+			pg_dumpall_globals_clean => 1,
+		},
+	},
 
 	'ALTER COLLATION test0 OWNER TO' => {
 		regexp    => qr/^ALTER COLLATION public.test0 OWNER TO .*;/m,
 		collation => 1,
 		like      => { %full_runs, section_pre_data => 1, },
-		unlike    => { %dump_test_schema_runs, no_owner => 1, }, },
+		unlike    => { %dump_test_schema_runs, no_owner => 1, },
+	},
 
 	'ALTER FOREIGN DATA WRAPPER dummy OWNER TO' => {
 		regexp => qr/^ALTER FOREIGN DATA WRAPPER dummy OWNER TO .*;/m,
 		like   => { %full_runs, section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER SERVER s1 OWNER TO' => {
 		regexp => qr/^ALTER SERVER s1 OWNER TO .*;/m,
 		like   => { %full_runs, section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER FUNCTION dump_test.pltestlang_call_handler() OWNER TO' => {
 		regexp => qr/^
@@ -406,7 +486,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER OPERATOR FAMILY dump_test.op_family OWNER TO' => {
 		regexp => qr/^
@@ -417,7 +499,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER OPERATOR FAMILY dump_test.op_family USING btree' => {
 		create_order => 75,
@@ -442,7 +526,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER OPERATOR CLASS dump_test.op_class OWNER TO' => {
 		regexp => qr/^
@@ -453,12 +538,15 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER PUBLICATION pub1 OWNER TO' => {
 		regexp => qr/^ALTER PUBLICATION pub1 OWNER TO .*;/m,
 		like   => { %full_runs, section_post_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER LARGE OBJECT ... OWNER TO' => {
 		regexp => qr/^ALTER LARGE OBJECT \d+ OWNER TO .*;/m,
@@ -467,16 +555,20 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			no_blobs    => 1,
 			no_owner    => 1,
-			schema_only => 1, }, },
+			schema_only => 1,
+		},
+	},
 
 	'ALTER PROCEDURAL LANGUAGE pltestlang OWNER TO' => {
 		regexp => qr/^ALTER PROCEDURAL LANGUAGE pltestlang OWNER TO .*;/m,
 		like   => { %full_runs, section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER SCHEMA dump_test OWNER TO' => {
 		regexp => qr/^ALTER SCHEMA dump_test OWNER TO .*;/m,
@@ -484,15 +576,19 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER SCHEMA dump_test_second_schema OWNER TO' => {
 		regexp => qr/^ALTER SCHEMA dump_test_second_schema OWNER TO .*;/m,
 		like   => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER SEQUENCE test_table_col1_seq' => {
 		regexp => qr/^
@@ -502,10 +598,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER SEQUENCE test_third_table_col1_seq' => {
 		regexp => qr/^
@@ -514,7 +613,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ADD CONSTRAINT ... PRIMARY KEY' => {
 		regexp => qr/^
@@ -525,10 +626,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col1 SET STATISTICS 90' => {
 		create_order => 93,
@@ -541,10 +645,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col2 SET STORAGE' => {
 		create_order => 94,
@@ -557,10 +664,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col3 SET STORAGE' => {
 		create_order => 95,
@@ -573,10 +683,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col4 SET n_distinct' => {
 		create_order => 95,
@@ -589,10 +702,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY dump_test.measurement ATTACH PARTITION measurement_y2006m2'
 	  => {
@@ -600,7 +716,8 @@ my %tests = (
 			\QALTER TABLE ONLY dump_test.measurement ATTACH PARTITION dump_test_second_schema.measurement_y2006m2 \E
 			\QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n
 			/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	  },
 
 	'ALTER TABLE test_table CLUSTER ON test_table_pkey' => {
 		create_order => 96,
@@ -613,10 +730,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE test_table DISABLE TRIGGER ALL' => {
 		regexp => qr/^
@@ -625,7 +745,8 @@ my %tests = (
 			\QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E
 			\n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n\n\n
 			\QALTER TABLE dump_test.test_table ENABLE TRIGGER ALL;\E/xm,
-		like => { data_only => 1, }, },
+		like => { data_only => 1, },
+	},
 
 	'ALTER FOREIGN TABLE foreign_table ALTER COLUMN c1 OPTIONS' => {
 		regexp => qr/^
@@ -635,7 +756,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER TABLE test_table OWNER TO' => {
 		regexp => qr/^ALTER TABLE dump_test.test_table OWNER TO .*;/m,
@@ -643,11 +765,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
 			exclude_test_table       => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TABLE test_table ENABLE ROW LEVEL SECURITY' => {
 		create_order => 23,
@@ -659,10 +784,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE test_second_table OWNER TO' => {
 		regexp => qr/^ALTER TABLE dump_test.test_second_table OWNER TO .*;/m,
@@ -670,7 +798,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TABLE test_third_table OWNER TO' => {
 		regexp =>
@@ -678,8 +808,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER TABLE measurement OWNER TO' => {
 		regexp => qr/^ALTER TABLE dump_test.measurement OWNER TO .*;/m,
@@ -687,7 +819,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TABLE measurement_y2006m2 OWNER TO' => {
 		regexp =>
@@ -695,8 +829,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER FOREIGN TABLE foreign_table OWNER TO' => {
 		regexp =>
@@ -705,7 +841,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 OWNER TO' => {
 		regexp =>
@@ -714,7 +852,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TEXT SEARCH DICTIONARY alt_ts_dict1 OWNER TO' => {
 		regexp =>
@@ -725,7 +865,9 @@ my %tests = (
 			exclude_dump_test_schema => 1,
 			only_dump_test_table     => 1,
 			no_owner                 => 1,
-			role                     => 1, }, },
+			role                     => 1,
+		},
+	},
 
 	'BLOB create (using lo_from_bytea)' => {
 		create_order => 50,
@@ -737,10 +879,13 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			schema_only => 1,
-			no_blobs    => 1, }, },
+			no_blobs    => 1,
+		},
+	},
 
 	'BLOB load (using lo_from_bytea)' => {
 		regexp => qr/^
@@ -754,23 +899,28 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_data           => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			binary_upgrade => 1,
 			no_blobs       => 1,
-			schema_only    => 1, }, },
+			schema_only    => 1,
+		},
+	},
 
 	'COMMENT ON DATABASE postgres' => {
 		regexp => qr/^COMMENT ON DATABASE postgres IS .*;/m,
 
 		# Should appear in the same tests as "CREATE DATABASE postgres"
-		like => { createdb => 1, }, },
+		like => { createdb => 1, },
+	},
 
 	'COMMENT ON EXTENSION plpgsql' => {
 		regexp => qr/^COMMENT ON EXTENSION plpgsql IS .*;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'COMMENT ON TABLE dump_test.test_table' => {
 		create_order => 36,
@@ -782,10 +932,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'COMMENT ON COLUMN dump_test.test_table.col1' => {
 		create_order => 36,
@@ -798,10 +951,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'COMMENT ON COLUMN dump_test.composite.f1' => {
 		create_order => 44,
@@ -812,7 +968,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON COLUMN dump_test.test_second_table.col1' => {
 		create_order => 63,
@@ -823,7 +980,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON COLUMN dump_test.test_second_table.col2' => {
 		create_order => 64,
@@ -834,7 +992,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON CONVERSION dump_test.test_conversion' => {
 		create_order => 79,
@@ -844,7 +1003,8 @@ my %tests = (
 		  qr/^COMMENT ON CONVERSION dump_test.test_conversion IS 'comment on test conversion';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON COLLATION test0' => {
 		create_order => 77,
@@ -853,7 +1013,8 @@ my %tests = (
 		regexp =>
 		  qr/^COMMENT ON COLLATION public.test0 IS 'comment on test0 collation';/m,
 		collation => 1,
-		like      => { %full_runs, section_pre_data => 1, }, },
+		like      => { %full_runs, section_pre_data => 1, },
+	},
 
 	'COMMENT ON LARGE OBJECT ...' => {
 		create_order => 65,
@@ -872,10 +1033,13 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			no_blobs    => 1,
-			schema_only => 1, }, },
+			schema_only => 1,
+		},
+	},
 
 	'COMMENT ON PUBLICATION pub1' => {
 		create_order => 55,
@@ -883,7 +1047,8 @@ my %tests = (
 					   IS \'comment on publication\';',
 		regexp =>
 		  qr/^COMMENT ON PUBLICATION pub1 IS 'comment on publication';/m,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'COMMENT ON SUBSCRIPTION sub1' => {
 		create_order => 55,
@@ -891,7 +1056,8 @@ my %tests = (
 					   IS \'comment on subscription\';',
 		regexp =>
 		  qr/^COMMENT ON SUBSCRIPTION sub1 IS 'comment on subscription';/m,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1' => {
 		create_order => 84,
@@ -902,7 +1068,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 IS 'comment on text search configuration';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1' => {
 		create_order => 84,
@@ -913,7 +1080,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 IS 'comment on text search dictionary';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1' => {
 		create_order => 84,
@@ -923,7 +1091,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1 IS 'comment on text search parser';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1' => {
 		create_order => 84,
@@ -933,7 +1102,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 IS 'comment on text search template';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.planets - ENUM' => {
 		create_order => 68,
@@ -943,7 +1113,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.planets IS 'comment on enum type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.textrange - RANGE' => {
 		create_order => 69,
@@ -953,7 +1124,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.textrange IS 'comment on range type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.int42 - Regular' => {
 		create_order => 70,
@@ -963,7 +1135,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.int42 IS 'comment on regular type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.undefined - Undefined' => {
 		create_order => 71,
@@ -973,7 +1146,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.undefined IS 'comment on undefined type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COPY test_table' => {
 		create_order => 4,
@@ -988,13 +1162,16 @@ my %tests = (
 			%dump_test_schema_runs,
 			data_only            => 1,
 			only_dump_test_table => 1,
-			section_data         => 1, },
+			section_data         => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
 			exclude_test_table       => 1,
 			exclude_test_table_data  => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY fk_reference_test_table' => {
 		create_order => 22,
@@ -1010,11 +1187,14 @@ my %tests = (
 			data_only               => 1,
 			exclude_test_table      => 1,
 			exclude_test_table_data => 1,
-			section_data            => 1, },
+			section_data            => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	# In a data-only dump, we try to actually order according to FKs,
 	# so this check is just making sure that the referring table comes after
@@ -1026,7 +1206,8 @@ my %tests = (
 			\QCOPY dump_test.fk_reference_test_table (col1) FROM stdin;\E
 			\n(?:\d\n){5}\\\.\n
 			/xms,
-		like => { data_only => 1, }, },
+		like => { data_only => 1, },
+	},
 
 	'COPY test_second_table' => {
 		create_order => 7,
@@ -1041,11 +1222,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY test_third_table' => {
 		create_order => 12,
@@ -1060,19 +1244,23 @@ my %tests = (
 			%full_runs,
 			data_only    => 1,
 			role         => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade          => 1,
 			exclude_test_table_data => 1,
 			schema_only             => 1,
-			with_oids               => 1, }, },
+			with_oids               => 1,
+		},
+	},
 
 	'COPY test_third_table WITH OIDS' => {
 		regexp => qr/^
 			\QCOPY dump_test_second_schema.test_third_table (col1) WITH OIDS FROM stdin;\E
 			\n(?:\d+\t\d\n){9}\\\.\n
 			/xm,
-		like => { with_oids => 1, }, },
+		like => { with_oids => 1, },
+	},
 
 	'COPY test_fourth_table' => {
 		create_order => 7,
@@ -1086,11 +1274,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY test_fifth_table' => {
 		create_order => 54,
@@ -1104,11 +1295,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY test_table_identity' => {
 		create_order => 54,
@@ -1122,44 +1316,53 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'INSERT INTO test_table' => {
 		regexp => qr/^
 			(?:INSERT\ INTO\ dump_test.test_table\ \(col1,\ col2,\ col3,\ col4\)\ VALUES\ \(\d,\ NULL,\ NULL,\ NULL\);\n){9}
 			/xm,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_second_table' => {
 		regexp => qr/^
 			(?:INSERT\ INTO\ dump_test.test_second_table\ \(col1,\ col2\)
 			   \ VALUES\ \(\d,\ '\d'\);\n){9}/xm,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_third_table' => {
 		regexp => qr/^
 			(?:INSERT\ INTO\ dump_test_second_schema.test_third_table\ \(col1\)
 			   \ VALUES\ \(\d\);\n){9}/xm,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_fourth_table' => {
 		regexp =>
 		  qr/^\QINSERT INTO dump_test.test_fourth_table DEFAULT VALUES;\E/m,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_fifth_table' => {
 		regexp =>
 		  qr/^\QINSERT INTO dump_test.test_fifth_table (col1, col2, col3, col4, col5) VALUES (NULL, true, false, B'11001', 'NaN');\E/m,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_table_identity' => {
 		regexp =>
 		  qr/^\QINSERT INTO dump_test.test_table_identity (col1, col2) OVERRIDING SYSTEM VALUE VALUES (1, 'test');\E/m,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'CREATE ROLE regress_dump_test_role' => {
 		create_order => 1,
@@ -1168,7 +1371,9 @@ my %tests = (
 		like         => {
 			pg_dumpall_dbprivs       => 1,
 			pg_dumpall_globals       => 1,
-			pg_dumpall_globals_clean => 1, }, },
+			pg_dumpall_globals_clean => 1,
+		},
+	},
 
 	'CREATE ACCESS METHOD gist2' => {
 		create_order => 52,
@@ -1176,7 +1381,8 @@ my %tests = (
 		  'CREATE ACCESS METHOD gist2 TYPE INDEX HANDLER gisthandler;',
 		regexp =>
 		  qr/CREATE ACCESS METHOD gist2 TYPE INDEX HANDLER gisthandler;/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE COLLATION test0 FROM "C"' => {
 		create_order => 76,
@@ -1184,7 +1390,8 @@ my %tests = (
 		regexp       => qr/^
 		  \QCREATE COLLATION public.test0 (provider = libc, locale = 'C');\E/xm,
 		collation => 1,
-		like      => { %full_runs, section_pre_data => 1, }, },
+		like      => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE CAST FOR timestamptz' => {
 		create_order => 51,
@@ -1192,13 +1399,15 @@ my %tests = (
 		  'CREATE CAST (timestamptz AS interval) WITH FUNCTION age(timestamptz) AS ASSIGNMENT;',
 		regexp =>
 		  qr/CREATE CAST \(timestamp with time zone AS interval\) WITH FUNCTION pg_catalog\.age\(timestamp with time zone\) AS ASSIGNMENT;/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE DATABASE postgres' => {
 		regexp => qr/^
 			\QCREATE DATABASE postgres WITH TEMPLATE = template0 \E
 			.*;/xm,
-		like => { createdb => 1, }, },
+		like => { createdb => 1, },
+	},
 
 	'CREATE DATABASE dump_test' => {
 		create_order => 47,
@@ -1206,7 +1415,8 @@ my %tests = (
 		regexp       => qr/^
 			\QCREATE DATABASE dump_test WITH TEMPLATE = template0 \E
 			.*;/xm,
-		like => { pg_dumpall_dbprivs => 1, }, },
+		like => { pg_dumpall_dbprivs => 1, },
+	},
 
 	'CREATE EXTENSION ... plpgsql' => {
 		regexp => qr/^
@@ -1214,7 +1424,8 @@ my %tests = (
 			/xm,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'CREATE AGGREGATE dump_test.newavg' => {
 		create_order => 25,
@@ -1238,8 +1449,10 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			exclude_test_table => 1,
-			section_pre_data   => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+			section_pre_data   => 1,
+		},
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE CONVERSION dump_test.test_conversion' => {
 		create_order => 78,
@@ -1249,7 +1462,8 @@ my %tests = (
 		  qr/^\QCREATE DEFAULT CONVERSION dump_test.test_conversion FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE DOMAIN dump_test.us_postal_code' => {
 		create_order => 29,
@@ -1267,7 +1481,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.pltestlang_call_handler' => {
 		create_order => 17,
@@ -1283,7 +1498,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.trigger_func' => {
 		create_order => 30,
@@ -1298,7 +1514,8 @@ my %tests = (
 			\$\$;/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.event_trigger_func' => {
 		create_order => 32,
@@ -1313,7 +1530,8 @@ my %tests = (
 			\$\$;/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE OPERATOR FAMILY dump_test.op_family' => {
 		create_order => 73,
@@ -1324,7 +1542,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE OPERATOR CLASS dump_test.op_class' => {
 		create_order => 74,
@@ -1351,7 +1570,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE EVENT TRIGGER test_event_trigger' => {
 		create_order => 33,
@@ -1363,7 +1583,8 @@ my %tests = (
 			\QON ddl_command_start\E
 			\n\s+\QEXECUTE PROCEDURE dump_test.event_trigger_func();\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'CREATE TRIGGER test_trigger' => {
 		create_order => 31,
@@ -1380,10 +1601,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_test_table       => 1,
-			exclude_dump_test_schema => 1, }, },
+			exclude_dump_test_schema => 1,
+		},
+	},
 
 	'CREATE TYPE dump_test.planets AS ENUM' => {
 		create_order => 37,
@@ -1399,7 +1623,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			binary_upgrade           => 1,
-			exclude_dump_test_schema => 1, }, },
+			exclude_dump_test_schema => 1,
+		},
+	},
 
 	'CREATE TYPE dump_test.planets AS ENUM pg_upgrade' => {
 		regexp => qr/^
@@ -1411,7 +1637,8 @@ my %tests = (
 			\n.*^
 			\QALTER TYPE dump_test.planets ADD VALUE 'mars';\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE TYPE dump_test.textrange AS RANGE' => {
 		create_order => 38,
@@ -1424,7 +1651,8 @@ my %tests = (
 			\n\);/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.int42' => {
 		create_order => 39,
@@ -1432,7 +1660,8 @@ my %tests = (
 		regexp       => qr/^CREATE TYPE dump_test.int42;/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1' => {
 		create_order => 80,
@@ -1443,7 +1672,8 @@ my %tests = (
 			\s+\QPARSER = pg_catalog."default" );\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 ...' => {
 		regexp => qr/^
@@ -1507,7 +1737,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1' => {
 		create_order => 81,
@@ -1518,7 +1749,8 @@ my %tests = (
 			\s+\QLEXIZE = dsimple_lexize );\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1' => {
 		create_order => 82,
@@ -1533,7 +1765,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1' => {
 		create_order => 83,
@@ -1545,7 +1778,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.int42_in' => {
 		create_order => 40,
@@ -1559,7 +1793,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.int42_out' => {
 		create_order => 41,
@@ -1573,7 +1808,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE PROCEDURE dump_test.ptest1' => {
 		create_order => 41,
@@ -1586,7 +1822,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.int42 populated' => {
 		create_order => 42,
@@ -1609,7 +1846,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.composite' => {
 		create_order => 43,
@@ -1625,7 +1863,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.undefined' => {
 		create_order => 39,
@@ -1633,19 +1872,22 @@ my %tests = (
 		regexp       => qr/^CREATE TYPE dump_test.undefined;/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FOREIGN DATA WRAPPER dummy' => {
 		create_order => 35,
 		create_sql   => 'CREATE FOREIGN DATA WRAPPER dummy;',
 		regexp       => qr/CREATE FOREIGN DATA WRAPPER dummy;/m,
-		like         => { %full_runs, section_pre_data => 1, }, },
+		like         => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE SERVER s1 FOREIGN DATA WRAPPER dummy' => {
 		create_order => 36,
 		create_sql   => 'CREATE SERVER s1 FOREIGN DATA WRAPPER dummy;',
 		regexp       => qr/CREATE SERVER s1 FOREIGN DATA WRAPPER dummy;/m,
-		like         => { %full_runs, section_pre_data => 1, }, },
+		like         => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE FOREIGN TABLE dump_test.foreign_table SERVER s1' => {
 		create_order => 88,
@@ -1663,7 +1905,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE USER MAPPING FOR regress_dump_test_role SERVER s1' => {
 		create_order => 86,
@@ -1671,7 +1914,8 @@ my %tests = (
 		  'CREATE USER MAPPING FOR regress_dump_test_role SERVER s1;',
 		regexp =>
 		  qr/CREATE USER MAPPING FOR regress_dump_test_role SERVER s1;/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE TRANSFORM FOR int' => {
 		create_order => 34,
@@ -1679,7 +1923,8 @@ my %tests = (
 		  'CREATE TRANSFORM FOR int LANGUAGE SQL (FROM SQL WITH FUNCTION varchar_transform(internal), TO SQL WITH FUNCTION int4recv(internal));',
 		regexp =>
 		  qr/CREATE TRANSFORM FOR integer LANGUAGE sql \(FROM SQL WITH FUNCTION pg_catalog\.varchar_transform\(internal\), TO SQL WITH FUNCTION pg_catalog\.int4recv\(internal\)\);/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE LANGUAGE pltestlang' => {
 		create_order => 18,
@@ -1690,7 +1935,8 @@ my %tests = (
 			\QHANDLER dump_test.pltestlang_call_handler;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview' => {
 		create_order => 20,
@@ -1704,7 +1950,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview_second' => {
 		create_order => 21,
@@ -1719,7 +1966,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview_third' => {
 		create_order => 58,
@@ -1734,7 +1982,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview_fourth' => {
 		create_order => 59,
@@ -1749,7 +1998,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE POLICY p1 ON test_table' => {
 		create_order => 22,
@@ -1764,10 +2014,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p2 ON test_table FOR SELECT' => {
 		create_order => 24,
@@ -1781,10 +2034,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p3 ON test_table FOR INSERT' => {
 		create_order => 25,
@@ -1798,10 +2054,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p4 ON test_table FOR UPDATE' => {
 		create_order => 26,
@@ -1815,10 +2074,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p5 ON test_table FOR DELETE' => {
 		create_order => 27,
@@ -1832,10 +2094,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p6 ON test_table AS RESTRICTIVE' => {
 		create_order => 27,
@@ -1849,10 +2114,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE PUBLICATION pub1' => {
 		create_order => 50,
@@ -1860,7 +2128,8 @@ my %tests = (
 		regexp       => qr/^
 			\QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'CREATE PUBLICATION pub2' => {
 		create_order => 50,
@@ -1870,7 +2139,8 @@ my %tests = (
 		regexp => qr/^
 			\QCREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish = '');\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
@@ -1880,7 +2150,8 @@ my %tests = (
 		regexp => qr/^
 			\QCREATE SUBSCRIPTION sub1 CONNECTION 'dbname=doesnotexist' PUBLICATION pub1 WITH (connect = false, slot_name = 'sub1');\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'ALTER PUBLICATION pub1 ADD TABLE test_table' => {
 		create_order => 51,
@@ -1892,7 +2163,9 @@ my %tests = (
 		like   => { %full_runs, section_post_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER PUBLICATION pub1 ADD TABLE test_second_table' => {
 		create_order => 52,
@@ -1902,13 +2175,15 @@ my %tests = (
 			\QALTER PUBLICATION pub1 ADD TABLE ONLY dump_test.test_second_table;\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SCHEMA public' => {
 		regexp => qr/^CREATE SCHEMA public;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'CREATE SCHEMA dump_test' => {
 		create_order => 2,
@@ -1916,7 +2191,8 @@ my %tests = (
 		regexp       => qr/^CREATE SCHEMA dump_test;/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SCHEMA dump_test_second_schema' => {
 		create_order => 9,
@@ -1925,7 +2201,9 @@ my %tests = (
 		like         => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'CREATE TABLE test_table' => {
 		create_order => 3,
@@ -1949,10 +2227,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE TABLE fk_reference_test_table' => {
 		create_order => 21,
@@ -1966,7 +2247,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TABLE test_second_table' => {
 		create_order => 6,
@@ -1982,7 +2264,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE UNLOGGED TABLE test_third_table WITH OIDS' => {
 		create_order => 11,
@@ -2003,11 +2286,14 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
+			section_pre_data => 1,
+		},
 		unlike => {
 
 			# FIXME figure out why/how binary upgrade drops OIDs.
-			binary_upgrade => 1, }, },
+			binary_upgrade => 1,
+		},
+	},
 
 	'CREATE TABLE measurement PARTITIONED BY' => {
 		create_order => 90,
@@ -2032,7 +2318,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			binary_upgrade           => 1,
-			exclude_dump_test_schema => 1, }, },
+			exclude_dump_test_schema => 1,
+		},
+	},
 
 	'CREATE TABLE measurement_y2006m2 PARTITION OF' => {
 		create_order => 91,
@@ -2049,8 +2337,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { binary_upgrade => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { binary_upgrade => 1, },
+	},
 
 	'CREATE TABLE test_fourth_table_zero_col' => {
 		create_order => 6,
@@ -2062,7 +2352,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TABLE test_fifth_table' => {
 		create_order => 53,
@@ -2084,7 +2375,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TABLE test_table_identity' => {
 		create_order => 3,
@@ -2109,7 +2401,8 @@ my %tests = (
 			/xms,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE STATISTICS extended_stats_no_options' => {
 		create_order => 97,
@@ -2120,7 +2413,8 @@ my %tests = (
 		    /xms,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE STATISTICS extended_stats_options' => {
 		create_order => 97,
@@ -2131,7 +2425,8 @@ my %tests = (
 		    /xms,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SEQUENCE test_table_col1_seq' => {
 		regexp => qr/^
@@ -2147,8 +2442,10 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+			section_pre_data     => 1,
+		},
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SEQUENCE test_third_table_col1_seq' => {
 		regexp => qr/^
@@ -2163,7 +2460,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'CREATE UNIQUE INDEX test_third_table_idx ON test_third_table' => {
 		create_order => 13,
@@ -2176,7 +2475,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role              => 1,
-			section_post_data => 1, }, },
+			section_post_data => 1,
+		},
+	},
 
 	'CREATE INDEX ON ONLY measurement' => {
 		create_order => 92,
@@ -2201,14 +2502,17 @@ my %tests = (
 			schema_only             => 1,
 			section_post_data       => 1,
 			test_schema_plus_blobs  => 1,
-			with_oids               => 1, },
+			with_oids               => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
 			only_dump_test_table     => 1,
 			pg_dumpall_globals       => 1,
 			pg_dumpall_globals_clean => 1,
 			role                     => 1,
-			section_pre_data         => 1, }, },
+			section_pre_data         => 1,
+		},
+	},
 
 	'ALTER TABLE measurement PRIMARY KEY' => {
 		all_runs     => 1,
@@ -2222,7 +2526,8 @@ my %tests = (
 		/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE INDEX ... ON measurement_y2006_m2' => {
 		regexp => qr/^
@@ -2231,7 +2536,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role              => 1,
-			section_post_data => 1, }, },
+			section_post_data => 1,
+		},
+	},
 
 	'ALTER INDEX ... ATTACH PARTITION' => {
 		regexp => qr/^
@@ -2240,7 +2547,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role              => 1,
-			section_post_data => 1, }, },
+			section_post_data => 1,
+		},
+	},
 
 	'ALTER INDEX ... ATTACH PARTITION (primary key)' => {
 		all_runs  => 1,
@@ -2264,14 +2573,17 @@ my %tests = (
 			role                     => 1,
 			schema_only              => 1,
 			section_post_data        => 1,
-			with_oids                => 1, },
+			with_oids                => 1,
+		},
 		unlike => {
 			only_dump_test_schema    => 1,
 			only_dump_test_table     => 1,
 			pg_dumpall_globals       => 1,
 			pg_dumpall_globals_clean => 1,
 			section_pre_data         => 1,
-			test_schema_plus_blobs   => 1, }, },
+			test_schema_plus_blobs   => 1,
+		},
+	},
 
 	'CREATE VIEW test_view' => {
 		create_order => 61,
@@ -2285,7 +2597,8 @@ my %tests = (
 			\n\s+\QWITH LOCAL CHECK OPTION;\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER VIEW test_view SET DEFAULT' => {
 		create_order => 62,
@@ -2295,7 +2608,8 @@ my %tests = (
 			\QALTER TABLE ONLY dump_test.test_view ALTER COLUMN col1 SET DEFAULT 1;\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	# FIXME
 	'DROP SCHEMA public (for testing without public schema)' => {
@@ -2303,101 +2617,122 @@ my %tests = (
 		create_order => 100,
 		create_sql   => 'DROP SCHEMA public;',
 		regexp       => qr/^DROP SCHEMA public;/m,
-		like         => {}, },
+		like         => {},
+	},
 
 	'DROP SCHEMA public' => {
 		regexp => qr/^DROP SCHEMA public;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP SCHEMA IF EXISTS public' => {
 		regexp => qr/^DROP SCHEMA IF EXISTS public;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP EXTENSION plpgsql' => {
 		regexp => qr/^DROP EXTENSION plpgsql;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP FUNCTION dump_test.pltestlang_call_handler()' => {
 		regexp => qr/^DROP FUNCTION dump_test\.pltestlang_call_handler\(\);/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP LANGUAGE pltestlang' => {
 		regexp => qr/^DROP PROCEDURAL LANGUAGE pltestlang;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP SCHEMA dump_test' => {
 		regexp => qr/^DROP SCHEMA dump_test;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP SCHEMA dump_test_second_schema' => {
 		regexp => qr/^DROP SCHEMA dump_test_second_schema;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE test_table' => {
 		regexp => qr/^DROP TABLE dump_test\.test_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE fk_reference_test_table' => {
 		regexp => qr/^DROP TABLE dump_test\.fk_reference_test_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE test_second_table' => {
 		regexp => qr/^DROP TABLE dump_test\.test_second_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE test_third_table' => {
 		regexp => qr/^DROP TABLE dump_test_second_schema\.test_third_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP EXTENSION IF EXISTS plpgsql' => {
 		regexp => qr/^DROP EXTENSION IF EXISTS plpgsql;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP FUNCTION IF EXISTS dump_test.pltestlang_call_handler()' => {
 		regexp => qr/^
 			\QDROP FUNCTION IF EXISTS dump_test.pltestlang_call_handler();\E
 			/xm,
-		like => { clean_if_exists => 1, }, },
+		like => { clean_if_exists => 1, },
+	},
 
 	'DROP LANGUAGE IF EXISTS pltestlang' => {
 		regexp => qr/^DROP PROCEDURAL LANGUAGE IF EXISTS pltestlang;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP SCHEMA IF EXISTS dump_test' => {
 		regexp => qr/^DROP SCHEMA IF EXISTS dump_test;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP SCHEMA IF EXISTS dump_test_second_schema' => {
 		regexp => qr/^DROP SCHEMA IF EXISTS dump_test_second_schema;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP TABLE IF EXISTS test_table' => {
 		regexp => qr/^DROP TABLE IF EXISTS dump_test\.test_table;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP TABLE IF EXISTS test_second_table' => {
 		regexp => qr/^DROP TABLE IF EXISTS dump_test\.test_second_table;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP TABLE IF EXISTS test_third_table' => {
 		regexp => qr/^
 			\QDROP TABLE IF EXISTS dump_test_second_schema.test_third_table;\E
 			/xm,
-		like => { clean_if_exists => 1, }, },
+		like => { clean_if_exists => 1, },
+	},
 
 	'DROP ROLE regress_dump_test_role' => {
 		regexp => qr/^
 			\QDROP ROLE regress_dump_test_role;\E
 			/xm,
-		like => { pg_dumpall_globals_clean => 1, }, },
+		like => { pg_dumpall_globals_clean => 1, },
+	},
 
 	'DROP ROLE pg_' => {
 		regexp => qr/^
@@ -2405,7 +2740,8 @@ my %tests = (
 			/xm,
 
 		# this shouldn't ever get emitted anywhere
-		like => {}, },
+		like => {},
+	},
 
 	'GRANT USAGE ON SCHEMA dump_test_second_schema' => {
 		create_order => 10,
@@ -2417,8 +2753,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON FOREIGN DATA WRAPPER dummy' => {
 		create_order => 85,
@@ -2428,7 +2766,8 @@ my %tests = (
 			\QGRANT ALL ON FOREIGN DATA WRAPPER dummy TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON FOREIGN SERVER s1' => {
 		create_order => 85,
@@ -2438,7 +2777,8 @@ my %tests = (
 			\QGRANT ALL ON FOREIGN SERVER s1 TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON DOMAIN dump_test.us_postal_code' => {
 		create_order => 72,
@@ -2451,7 +2791,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT USAGE ON TYPE dump_test.int42' => {
 		create_order => 87,
@@ -2464,7 +2806,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT USAGE ON TYPE dump_test.planets - ENUM' => {
 		create_order => 66,
@@ -2477,7 +2821,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT USAGE ON TYPE dump_test.textrange - RANGE' => {
 		create_order => 67,
@@ -2490,7 +2836,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT CREATE ON DATABASE dump_test' => {
 		create_order => 48,
@@ -2499,7 +2847,8 @@ my %tests = (
 		regexp => qr/^
 			\QGRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;\E
 			/xm,
-		like => { pg_dumpall_dbprivs => 1, }, },
+		like => { pg_dumpall_dbprivs => 1, },
+	},
 
 	'GRANT SELECT ON TABLE test_table' => {
 		create_order => 5,
@@ -2511,11 +2860,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
 			exclude_test_table       => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT SELECT ON TABLE test_third_table' => {
 		create_order => 19,
@@ -2527,8 +2879,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT ALL ON SEQUENCE test_third_table_col1_seq' => {
 		create_order => 28,
@@ -2541,8 +2895,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT SELECT ON TABLE measurement' => {
 		create_order => 91,
@@ -2555,7 +2911,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT SELECT ON TABLE measurement_y2006m2' => {
 		create_order => 92,
@@ -2567,8 +2925,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT ALL ON LARGE OBJECT ...' => {
 		create_order => 60,
@@ -2587,12 +2947,15 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			binary_upgrade => 1,
 			no_blobs       => 1,
 			no_privs       => 1,
-			schema_only    => 1, }, },
+			schema_only    => 1,
+		},
+	},
 
 	'GRANT INSERT(col1) ON TABLE test_second_table' => {
 		create_order => 8,
@@ -2606,7 +2969,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT EXECUTE ON FUNCTION pg_sleep() TO regress_dump_test_role' => {
 		create_order => 16,
@@ -2616,7 +2981,8 @@ my %tests = (
 			\QGRANT ALL ON FUNCTION pg_catalog.pg_sleep(double precision) TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT SELECT (proname ...) ON TABLE pg_proc TO public' => {
 		create_order => 46,
@@ -2684,7 +3050,8 @@ my %tests = (
 		\QGRANT SELECT(proconfig) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.*
 		\QGRANT SELECT(proacl) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E/xms,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON SCHEMA public TO public' => {
 		regexp => qr/^
@@ -2693,7 +3060,8 @@ my %tests = (
 			/xm,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'REFRESH MATERIALIZED VIEW matview' => {
 		regexp => qr/^REFRESH MATERIALIZED VIEW dump_test.matview;/m,
@@ -2702,7 +3070,9 @@ my %tests = (
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'REFRESH MATERIALIZED VIEW matview_second' => {
 		regexp => qr/^
@@ -2715,21 +3085,25 @@ my %tests = (
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	# FIXME
 	'REFRESH MATERIALIZED VIEW matview_third' => {
 		regexp => qr/^
 			\QREFRESH MATERIALIZED VIEW dump_test.matview_third;\E
 			/xms,
-		like => {}, },
+		like => {},
+	},
 
 	# FIXME
 	'REFRESH MATERIALIZED VIEW matview_fourth' => {
 		regexp => qr/^
 			\QREFRESH MATERIALIZED VIEW dump_test.matview_fourth;\E
 			/xms,
-		like => {}, },
+		like => {},
+	},
 
 	'REVOKE CONNECT ON DATABASE dump_test FROM public' => {
 		create_order => 49,
@@ -2739,7 +3113,8 @@ my %tests = (
 			\QGRANT TEMPORARY ON DATABASE dump_test TO PUBLIC;\E\n
 			\QGRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;\E
 			/xm,
-		like => { pg_dumpall_dbprivs => 1, }, },
+		like => { pg_dumpall_dbprivs => 1, },
+	},
 
 	'REVOKE EXECUTE ON FUNCTION pg_sleep() FROM public' => {
 		create_order => 15,
@@ -2749,7 +3124,8 @@ my %tests = (
 			\QREVOKE ALL ON FUNCTION pg_catalog.pg_sleep(double precision) FROM PUBLIC;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'REVOKE SELECT ON TABLE pg_proc FROM public' => {
 		create_order => 45,
@@ -2757,7 +3133,8 @@ my %tests = (
 		regexp =>
 		  qr/^REVOKE SELECT ON TABLE pg_catalog.pg_proc FROM PUBLIC;/m,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'REVOKE CREATE ON SCHEMA public FROM public' => {
 		create_order => 16,
@@ -2767,7 +3144,8 @@ my %tests = (
 			\n\QGRANT USAGE ON SCHEMA public TO PUBLIC;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'REVOKE USAGE ON LANGUAGE plpgsql FROM public' => {
 		create_order => 16,
@@ -2778,8 +3156,10 @@ my %tests = (
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
 			role                 => 1,
-			section_pre_data     => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data     => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 );
 
@@ -2800,7 +3180,8 @@ $node->psql(
 	'postgres',
 	"CREATE COLLATION testing FROM \"C\"; DROP COLLATION testing;",
 	on_error_stop => 0,
-	stderr        => \$collation_check_stderr);
+	stderr        => \$collation_check_stderr
+);
 
 if ($collation_check_stderr !~ /ERROR: /)
 {
@@ -2903,7 +3284,8 @@ foreach my $test (
 		{
 			0;
 		}
-	} keys %tests)
+	} keys %tests
+  )
 {
 	my $test_db = 'postgres';
 
@@ -2947,7 +3329,8 @@ command_fails_like(
 command_fails_like(
 	[ 'pg_dump', '-p', "$port", '--role=regress_dump_test_role' ],
 	qr/\Qpg_dump: [archiver (db)] query failed: ERROR:  permission denied for\E/,
-	'pg_dump: [archiver (db)] query failed: ERROR:  permission denied for');
+	'pg_dump: [archiver (db)] query failed: ERROR:  permission denied for'
+);
 
 #########################################
 # Test dumping a non-existent schema, table, and patterns with --strict-names
@@ -2955,22 +3338,26 @@ command_fails_like(
 command_fails_like(
 	[ 'pg_dump', '-p', "$port", '-n', 'nonexistant' ],
 	qr/\Qpg_dump: no matching schemas were found\E/,
-	'pg_dump: no matching schemas were found');
+	'pg_dump: no matching schemas were found'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-p', "$port", '-t', 'nonexistant' ],
 	qr/\Qpg_dump: no matching tables were found\E/,
-	'pg_dump: no matching tables were found');
+	'pg_dump: no matching tables were found'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-p', "$port", '--strict-names', '-n', 'nonexistant*' ],
 	qr/\Qpg_dump: no matching schemas were found for pattern\E/,
-	'pg_dump: no matching schemas were found for pattern');
+	'pg_dump: no matching schemas were found for pattern'
+);
 
 command_fails_like(
 	[ 'pg_dump', '-p', "$port", '--strict-names', '-t', 'nonexistant*' ],
 	qr/\Qpg_dump: no matching tables were found for pattern\E/,
-	'pg_dump: no matching tables were found for pattern');
+	'pg_dump: no matching tables were found for pattern'
+);
 
 #########################################
 # Run all runs
@@ -3030,16 +3417,24 @@ foreach my $run (sort keys %pgdump_runs)
 		if ($tests{$test}->{like}->{$test_key}
 			&& !defined($tests{$test}->{unlike}->{$test_key}))
 		{
-			if (!ok($output_file =~ $tests{$test}->{regexp},
-					"$run: should dump $test"))
+			if (
+				!ok(
+					$output_file =~ $tests{$test}->{regexp},
+					"$run: should dump $test"
+				)
+			  )
 			{
 				diag("Review $run results in $tempdir");
 			}
 		}
 		else
 		{
-			if (!ok($output_file !~ $tests{$test}->{regexp},
-					"$run: should not dump $test"))
+			if (
+				!ok(
+					$output_file !~ $tests{$test}->{regexp},
+					"$run: should not dump $test"
+				)
+			  )
 			{
 				diag("Review $run results in $tempdir");
 			}
diff --git a/src/bin/pg_dump/t/010_dump_connstr.pl b/src/bin/pg_dump/t/010_dump_connstr.pl
index bf9bd52..ae651f3 100644
--- a/src/bin/pg_dump/t/010_dump_connstr.pl
+++ b/src/bin/pg_dump/t/010_dump_connstr.pl
@@ -34,9 +34,12 @@ $node->init(extra => [ '--locale=C', '--encoding=LATIN1' ]);
 
 # prep pg_hba.conf and pg_ident.conf
 $node->run_log(
-	[   $ENV{PG_REGRESS}, '--config-auth',
+	[
+		$ENV{PG_REGRESS}, '--config-auth',
 		$node->data_dir,  '--create-role',
-		"$dbname1,$dbname2,$dbname3,$dbname4" ]);
+		"$dbname1,$dbname2,$dbname3,$dbname4"
+	]
+);
 $node->start;
 
 my $backupdir = $node->backup_dir;
@@ -54,25 +57,37 @@ foreach my $dbname ($dbname1, $dbname2, $dbname3, $dbname4, 'CamelCase')
 # For these tests, pg_dumpall -r is used because it produces a short
 # dump.
 $node->command_ok(
-	[   'pg_dumpall', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname1),
-		'-U', $dbname4 ],
-	'pg_dumpall with long ASCII name 1');
+		'-U', $dbname4
+	],
+	'pg_dumpall with long ASCII name 1'
+);
 $node->command_ok(
-	[   'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname2),
-		'-U', $dbname3 ],
-	'pg_dumpall with long ASCII name 2');
+		'-U', $dbname3
+	],
+	'pg_dumpall with long ASCII name 2'
+);
 $node->command_ok(
-	[   'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname3),
-		'-U', $dbname2 ],
-	'pg_dumpall with long ASCII name 3');
+		'-U', $dbname2
+	],
+	'pg_dumpall with long ASCII name 3'
+);
 $node->command_ok(
-	[   'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname4),
-		'-U', $dbname1 ],
-	'pg_dumpall with long ASCII name 4');
+		'-U', $dbname1
+	],
+	'pg_dumpall with long ASCII name 4'
+);
 $node->command_ok(
 	[ 'pg_dumpall', '--no-sync', '-r', '-l', 'dbname=template1' ],
 	'pg_dumpall -l accepts connection string');
@@ -82,7 +97,8 @@ $node->run_log([ 'createdb', "foo\n\rbar" ]);
 # not sufficient to use -r here
 $node->command_fails(
 	[ 'pg_dumpall', '--no-sync', '-f', $discard ],
-	'pg_dumpall with \n\r in database name');
+	'pg_dumpall with \n\r in database name'
+);
 $node->run_log([ 'dropdb', "foo\n\rbar" ]);
 
 
@@ -91,9 +107,12 @@ $node->safe_psql($dbname1, 'CREATE TABLE t0()');
 
 # XXX no printed message when this fails, just SIGPIPE termination
 $node->command_ok(
-	[   'pg_dump', '-Fd', '--no-sync', '-j2', '-f', $dirfmt, '-U', $dbname1,
-		$node->connstr($dbname1) ],
-	'parallel dump');
+	[
+		'pg_dump', '-Fd', '--no-sync', '-j2', '-f', $dirfmt, '-U', $dbname1,
+		$node->connstr($dbname1)
+	],
+	'parallel dump'
+);
 
 # recreate $dbname1 for restore test
 $node->run_log([ 'dropdb',   $dbname1 ]);
@@ -101,15 +120,19 @@ $node->run_log([ 'createdb', $dbname1 ]);
 
 $node->command_ok(
 	[ 'pg_restore', '-v', '-d', 'template1', '-j2', '-U', $dbname1, $dirfmt ],
-	'parallel restore');
+	'parallel restore'
+);
 
 $node->run_log([ 'dropdb', $dbname1 ]);
 
 $node->command_ok(
-	[   'pg_restore', '-C',  '-v', '-d',
+	[
+		'pg_restore', '-C',  '-v', '-d',
 		'template1',  '-j2', '-U', $dbname1,
-		$dirfmt ],
-	'parallel restore with create');
+		$dirfmt
+	],
+	'parallel restore with create'
+);
 
 
 $node->command_ok([ 'pg_dumpall', '--no-sync', '-f', $plain, '-U', $dbname1 ],
@@ -127,9 +150,12 @@ my $envar_node = get_new_node('destination_envar');
 $envar_node->init(
 	extra => [ '-U', $bootstrap_super, '--locale=C', '--encoding=LATIN1' ]);
 $envar_node->run_log(
-	[   $ENV{PG_REGRESS},      '--config-auth',
+	[
+		$ENV{PG_REGRESS},      '--config-auth',
 		$envar_node->data_dir, '--create-role',
-		"$bootstrap_super,$restore_super" ]);
+		"$bootstrap_super,$restore_super"
+	]
+);
 $envar_node->start;
 
 # make superuser for restore
@@ -157,18 +183,24 @@ my $cmdline_node = get_new_node('destination_cmdline');
 $cmdline_node->init(
 	extra => [ '-U', $bootstrap_super, '--locale=C', '--encoding=LATIN1' ]);
 $cmdline_node->run_log(
-	[   $ENV{PG_REGRESS},        '--config-auth',
+	[
+		$ENV{PG_REGRESS},        '--config-auth',
 		$cmdline_node->data_dir, '--create-role',
-		"$bootstrap_super,$restore_super" ]);
+		"$bootstrap_super,$restore_super"
+	]
+);
 $cmdline_node->start;
 $cmdline_node->run_log(
 	[ 'createuser', '-U', $bootstrap_super, '-s', $restore_super ]);
 {
 	$result = run_log(
-		[   'psql',         '-p', $cmdline_node->port, '-U',
-			$restore_super, '-X', '-f',                $plain ],
+		[
+			'psql',         '-p', $cmdline_node->port, '-U',
+			$restore_super, '-X', '-f',                $plain
+		],
 		'2>',
-		\$stderr);
+		\$stderr
+	);
 }
 ok($result,
 	'restore full dump with command-line options for connection parameters');
diff --git a/src/bin/pg_resetwal/t/002_corrupted.pl b/src/bin/pg_resetwal/t/002_corrupted.pl
index ab840d1..f9a7b3f 100644
--- a/src/bin/pg_resetwal/t/002_corrupted.pl
+++ b/src/bin/pg_resetwal/t/002_corrupted.pl
@@ -31,9 +31,11 @@ command_checks_all(
 	[ 'pg_resetwal', '-n', $node->data_dir ],
 	0,
 	[qr/pg_control version number/],
-	[   qr/pg_resetwal: pg_control exists but is broken or wrong version; ignoring it/
+	[
+		qr/pg_resetwal: pg_control exists but is broken or wrong version; ignoring it/
 	],
-	'processes corrupted pg_control all zeroes');
+	'processes corrupted pg_control all zeroes'
+);
 
 # Put in the previously saved header data.  This uses a different code
 # path internally, allowing us to process a zero WAL segment size.
@@ -46,6 +48,8 @@ command_checks_all(
 	[ 'pg_resetwal', '-n', $node->data_dir ],
 	0,
 	[qr/pg_control version number/],
-	[   qr/\Qpg_resetwal: pg_control specifies invalid WAL segment size (0 bytes); proceed with caution\E/
+	[
+		qr/\Qpg_resetwal: pg_control specifies invalid WAL segment size (0 bytes); proceed with caution\E/
 	],
-	'processes zero WAL segment size');
+	'processes zero WAL segment size'
+);
diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index 278ffd8..3e6f03e 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -92,7 +92,8 @@ sub check_query
 	my $result = run [
 		'psql', '-q', '-A', '-t', '--no-psqlrc', '-d',
 		$node_master->connstr('postgres'),
-		'-c', $query ],
+		'-c', $query
+	  ],
 	  '>', \$stdout, '2>', \$stderr;
 
 	# We don't use ok() for the exit code and stderr, because we want this
@@ -128,7 +129,8 @@ sub setup_cluster
 	$node_master->append_conf(
 		'postgresql.conf', qq(
 wal_keep_segments = 20
-));
+)
+	);
 }
 
 sub start_master
@@ -154,7 +156,8 @@ sub create_standby
 primary_conninfo='$connstr_master application_name=rewind_standby'
 standby_mode=on
 recovery_target_timeline='latest'
-));
+)
+	);
 
 	# Start standby
 	$node_standby->start;
@@ -204,7 +207,8 @@ sub run_pg_rewind
 	# overwritten during the rewind.
 	copy(
 		"$master_pgdata/postgresql.conf",
-		"$tmp_folder/master-postgresql.conf.tmp");
+		"$tmp_folder/master-postgresql.conf.tmp"
+	);
 
 	# Now run pg_rewind
 	if ($test_mode eq "local")
@@ -214,21 +218,27 @@ sub run_pg_rewind
 		# Stop the master and be ready to perform the rewind
 		$node_standby->stop;
 		command_ok(
-			[   'pg_rewind',
+			[
+				'pg_rewind',
 				"--debug",
 				"--source-pgdata=$standby_pgdata",
-				"--target-pgdata=$master_pgdata" ],
-			'pg_rewind local');
+				"--target-pgdata=$master_pgdata"
+			],
+			'pg_rewind local'
+		);
 	}
 	elsif ($test_mode eq "remote")
 	{
 
 		# Do rewind using a remote connection as source
 		command_ok(
-			[   'pg_rewind',       "--debug",
+			[
+				'pg_rewind',       "--debug",
 				"--source-server", $standby_connstr,
-				"--target-pgdata=$master_pgdata" ],
-			'pg_rewind remote');
+				"--target-pgdata=$master_pgdata"
+			],
+			'pg_rewind remote'
+		);
 	}
 	else
 	{
@@ -240,11 +250,13 @@ sub run_pg_rewind
 	# Now move back postgresql.conf with old settings
 	move(
 		"$tmp_folder/master-postgresql.conf.tmp",
-		"$master_pgdata/postgresql.conf");
+		"$master_pgdata/postgresql.conf"
+	);
 
 	chmod(
 		$node_master->group_access() ? 0640 : 0600,
-		"$master_pgdata/postgresql.conf")
+		"$master_pgdata/postgresql.conf"
+	  )
 	  or BAIL_OUT(
 		"unable to set permissions for $master_pgdata/postgresql.conf");
 
@@ -255,7 +267,8 @@ sub run_pg_rewind
 primary_conninfo='port=$port_standby'
 standby_mode=on
 recovery_target_timeline='latest'
-));
+)
+	);
 
 	# Restart the master to check that rewind went correctly
 	$node_master->start;
diff --git a/src/bin/pg_rewind/t/001_basic.pl b/src/bin/pg_rewind/t/001_basic.pl
index 87bb71e..3aec3fe 100644
--- a/src/bin/pg_rewind/t/001_basic.pl
+++ b/src/bin/pg_rewind/t/001_basic.pl
@@ -71,20 +71,23 @@ sub run_test
 in master, before promotion
 in standby, after promotion
 ),
-		'table content');
+		'table content'
+	);
 
 	check_query(
 		'SELECT * FROM trunc_tbl',
 		qq(in master
 in master, before promotion
 ),
-		'truncation');
+		'truncation'
+	);
 
 	check_query(
 		'SELECT count(*) FROM tail_tbl',
 		qq(10001
 ),
-		'tail-copy');
+		'tail-copy'
+	);
 
 	# Permissions on PGDATA should be default
   SKIP:
diff --git a/src/bin/pg_rewind/t/002_databases.pl b/src/bin/pg_rewind/t/002_databases.pl
index bef0e17..d368cda 100644
--- a/src/bin/pg_rewind/t/002_databases.pl
+++ b/src/bin/pg_rewind/t/002_databases.pl
@@ -40,7 +40,8 @@ standby_afterpromotion
 template0
 template1
 ),
-		'database names');
+		'database names'
+	);
 
 	# Permissions on PGDATA should have group permissions
   SKIP:
diff --git a/src/bin/pg_rewind/t/003_extrafiles.pl b/src/bin/pg_rewind/t/003_extrafiles.pl
index 8f4f972..b099190 100644
--- a/src/bin/pg_rewind/t/003_extrafiles.pl
+++ b/src/bin/pg_rewind/t/003_extrafiles.pl
@@ -62,11 +62,13 @@ sub run_test
 			push @paths, $File::Find::name
 			  if $File::Find::name =~ m/.*tst_.*/;
 		},
-		$test_master_datadir);
+		$test_master_datadir
+	);
 	@paths = sort @paths;
 	is_deeply(
 		\@paths,
-		[   "$test_master_datadir/tst_both_dir",
+		[
+			"$test_master_datadir/tst_both_dir",
 			"$test_master_datadir/tst_both_dir/both_file1",
 			"$test_master_datadir/tst_both_dir/both_file2",
 			"$test_master_datadir/tst_both_dir/both_subdir",
@@ -77,7 +79,8 @@ sub run_test
 			"$test_master_datadir/tst_standby_dir/standby_subdir",
 			"$test_master_datadir/tst_standby_dir/standby_subdir/standby_file3"
 		],
-		"file lists match");
+		"file lists match"
+	);
 
 	RewindTest::clean_rewind_test();
 }
diff --git a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
index feadaa6..bff539b 100644
--- a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
+++ b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
@@ -69,7 +69,8 @@ sub run_test
 in master, before promotion
 in standby, after promotion
 ),
-		'table content');
+		'table content'
+	);
 
 	RewindTest::clean_rewind_test();
 }
diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl
index 947f13d..29b2733 100644
--- a/src/bin/pgbench/t/001_pgbench_with_server.pl
+++ b/src/bin/pgbench/t/001_pgbench_with_server.pl
@@ -59,8 +59,11 @@ pgbench(
 	[qr{processed: 125/125}],
 	[qr{^$}],
 	'concurrency OID generation',
-	{   '001_pgbench_concurrent_oid_generation' =>
-		  'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);' });
+	{
+		'001_pgbench_concurrent_oid_generation' =>
+		  'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);'
+	}
+);
 
 # cleanup
 $node->safe_psql('postgres', 'DROP TABLE oid_tbl;');
@@ -70,83 +73,107 @@ pgbench(
 	'no-such-database',
 	1,
 	[qr{^$}],
-	[   qr{connection to database "no-such-database" failed},
-		qr{FATAL:  database "no-such-database" does not exist} ],
-	'no such database');
+	[
+		qr{connection to database "no-such-database" failed},
+		qr{FATAL:  database "no-such-database" does not exist}
+	],
+	'no such database'
+);
 
 pgbench(
 	'-S -t 1', 1, [qr{^$}],
 	[qr{Perhaps you need to do initialization}],
-	'run without init');
+	'run without init'
+);
 
 # Initialize pgbench tables scale 1
 pgbench(
 	'-i', 0,
 	[qr{^$}],
-	[   qr{creating tables},       qr{vacuuming},
-		qr{creating primary keys}, qr{done\.} ],
-	'pgbench scale 1 initialization',);
+	[
+		qr{creating tables},       qr{vacuuming},
+		qr{creating primary keys}, qr{done\.}
+	],
+	'pgbench scale 1 initialization',
+);
 
 # Again, with all possible options
 pgbench(
 	'--initialize --init-steps=dtpvg --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default',
 	0,
 	[qr{^$}i],
-	[   qr{dropping old tables},
+	[
+		qr{dropping old tables},
 		qr{creating tables},
 		qr{vacuuming},
 		qr{creating primary keys},
 		qr{creating foreign keys},
-		qr{done\.} ],
-	'pgbench scale 1 initialization');
+		qr{done\.}
+	],
+	'pgbench scale 1 initialization'
+);
 
 # Test interaction of --init-steps with legacy step-selection options
 pgbench(
 	'--initialize --init-steps=dtpvgvv --no-vacuum --foreign-keys --unlogged-tables',
 	0,
 	[qr{^$}],
-	[   qr{dropping old tables},
+	[
+		qr{dropping old tables},
 		qr{creating tables},
 		qr{creating primary keys},
 		qr{.* of .* tuples \(.*\) done},
 		qr{creating foreign keys},
-		qr{done\.} ],
-	'pgbench --init-steps');
+		qr{done\.}
+	],
+	'pgbench --init-steps'
+);
 
 # Run all builtin scripts, for a few transactions each
 pgbench(
 	'--transactions=5 -Dfoo=bla --client=2 --protocol=simple --builtin=t'
 	  . ' --connect -n -v -n',
 	0,
-	[   qr{builtin: TPC-B},
+	[
+		qr{builtin: TPC-B},
 		qr{clients: 2\b},
 		qr{processed: 10/10},
-		qr{mode: simple} ],
+		qr{mode: simple}
+	],
 	[qr{^$}],
-	'pgbench tpcb-like');
+	'pgbench tpcb-like'
+);
 
 pgbench(
 	'--transactions=20 --client=5 -M extended --builtin=si -C --no-vacuum -s 1',
 	0,
-	[   qr{builtin: simple update},
+	[
+		qr{builtin: simple update},
 		qr{clients: 5\b},
 		qr{threads: 1\b},
 		qr{processed: 100/100},
-		qr{mode: extended} ],
+		qr{mode: extended}
+	],
 	[qr{scale option ignored}],
-	'pgbench simple update');
+	'pgbench simple update'
+);
 
 pgbench(
 	'-t 100 -c 7 -M prepared -b se --debug',
 	0,
-	[   qr{builtin: select only},
+	[
+		qr{builtin: select only},
 		qr{clients: 7\b},
 		qr{threads: 1\b},
 		qr{processed: 700/700},
-		qr{mode: prepared} ],
-	[   qr{vacuum},    qr{client 0}, qr{client 1}, qr{sending},
-		qr{receiving}, qr{executing} ],
-	'pgbench select only');
+		qr{mode: prepared}
+	],
+	[
+		qr{vacuum},    qr{client 0}, qr{client 1}, qr{sending},
+		qr{receiving}, qr{executing}
+	],
+	'pgbench select only'
+);
 
 # check if threads are supported
 my $nthreads = 2;
@@ -161,16 +188,19 @@ my $nthreads = 2;
 pgbench(
 	"-t 100 -c 1 -j $nthreads -M prepared -n",
 	0,
-	[   qr{type: multiple scripts},
+	[
+		qr{type: multiple scripts},
 		qr{mode: prepared},
 		qr{script 1: .*/001_pgbench_custom_script_1},
 		qr{weight: 2},
 		qr{script 2: .*/001_pgbench_custom_script_2},
 		qr{weight: 1},
-		qr{processed: 100/100} ],
+		qr{processed: 100/100}
+	],
 	[qr{^$}],
 	'pgbench custom scripts',
-	{   '001_pgbench_custom_script_1@1' => q{-- select only
+	{
+		'001_pgbench_custom_script_1@1' => q{-- select only
 \set aid random(1, :scale * 100000)
 SELECT abalance::INTEGER AS balance
   FROM pgbench_accounts
@@ -182,41 +212,53 @@ BEGIN;
 -- cast are needed for typing under -M prepared
 SELECT :foo::INT + :scale::INT * :client_id::INT AS bla;
 COMMIT;
-} });
+}
+	}
+);
 
 pgbench(
 	'-n -t 10 -c 1 -M simple',
 	0,
-	[   qr{type: .*/001_pgbench_custom_script_3},
+	[
+		qr{type: .*/001_pgbench_custom_script_3},
 		qr{processed: 10/10},
-		qr{mode: simple} ],
+		qr{mode: simple}
+	],
 	[qr{^$}],
 	'pgbench custom script',
-	{   '001_pgbench_custom_script_3' => q{-- select only variant
+	{
+		'001_pgbench_custom_script_3' => q{-- select only variant
 \set aid random(1, :scale * 100000)
 BEGIN;
 SELECT abalance::INTEGER AS balance
   FROM pgbench_accounts
   WHERE aid=:aid;
 COMMIT;
-} });
+}
+	}
+);
 
 pgbench(
 	'-n -t 10 -c 2 -M extended',
 	0,
-	[   qr{type: .*/001_pgbench_custom_script_4},
+	[
+		qr{type: .*/001_pgbench_custom_script_4},
 		qr{processed: 20/20},
-		qr{mode: extended} ],
+		qr{mode: extended}
+	],
 	[qr{^$}],
 	'pgbench custom script',
-	{   '001_pgbench_custom_script_4' => q{-- select only variant
+	{
+		'001_pgbench_custom_script_4' => q{-- select only variant
 \set aid random(1, :scale * 100000)
 BEGIN;
 SELECT abalance::INTEGER AS balance
   FROM pgbench_accounts
   WHERE aid=:aid;
 COMMIT;
-} });
+}
+	}
+);
 
 # test expressions
 # command 1..3 and 23 depend on random seed which is used to call srandom.
@@ -224,7 +266,8 @@ pgbench(
 	'--random-seed=5432 -t 1 -Dfoo=-10.1 -Dbla=false -Di=+3 -Dminint=-9223372036854775808 -Dn=null -Dt=t -Df=of -Dd=1.0',
 	0,
 	[ qr{type: .*/001_pgbench_expressions}, qr{processed: 1/1} ],
-	[   qr{setting random seed to 5432\b},
+	[
+		qr{setting random seed to 5432\b},
 
 		# After explicit seeding, the four * random checks (1-3,20) should be
 		# deterministic, but not necessarily portable.
@@ -289,7 +332,8 @@ pgbench(
 		qr{command=98.: int 5432\b},    # :random_seed
 	],
 	'pgbench expressions',
-	{   '001_pgbench_expressions' => q{-- integer functions
+	{
+		'001_pgbench_expressions' => q{-- integer functions
 \set i1 debug(random(10, 19))
 \set i2 debug(random_exponential(100, 199, 10.0))
 \set i3 debug(random_gaussian(1000, 1999, 10.0))
@@ -411,7 +455,9 @@ SELECT :v0, :v1, :v2, :v3;
 \set sc debug(:scale)
 \set ci debug(:client_id)
 \set rs debug(:random_seed)
-} });
+}
+	}
+);
 
 # random determinism when seeded
 $node->safe_psql('postgres',
@@ -428,7 +474,8 @@ for my $i (1, 2)
 		[qr{processed: 1/1}],
 		[qr{setting random seed to $seed\b}],
 		"random seeded with $seed",
-		{   "001_pgbench_random_seed_$i" => q{-- test random functions
+		{
+			"001_pgbench_random_seed_$i" => q{-- test random functions
 \set ur random(1000, 1999)
 \set er random_exponential(2000, 2999, 2.0)
 \set gr random_gaussian(3000, 3999, 3.0)
@@ -438,7 +485,9 @@ INSERT INTO seeded_random(seed, rand, val) VALUES
   (:random_seed, 'exponential', :er),
   (:random_seed, 'gaussian', :gr),
   (:random_seed, 'zipfian', :zr);
-} });
+}
+		}
+	);
 }
 
 # check that all runs generated the same 4 values
@@ -450,10 +499,14 @@ ok($ret == 0,  "psql seeded_random count ok");
 ok($err eq '', "psql seeded_random count stderr is empty");
 ok($out =~ /\b$seed\|uniform\|1\d\d\d\|2/,
 	"psql seeded_random count uniform");
-ok( $out =~ /\b$seed\|exponential\|2\d\d\d\|2/,
-	"psql seeded_random count exponential");
-ok( $out =~ /\b$seed\|gaussian\|3\d\d\d\|2/,
-	"psql seeded_random count gaussian");
+ok(
+	$out =~ /\b$seed\|exponential\|2\d\d\d\|2/,
+	"psql seeded_random count exponential"
+);
+ok(
+	$out =~ /\b$seed\|gaussian\|3\d\d\d\|2/,
+	"psql seeded_random count gaussian"
+);
 ok($out =~ /\b$seed\|zipfian\|4\d\d\d\|2/,
 	"psql seeded_random count zipfian");
 
@@ -462,12 +515,15 @@ $node->safe_psql('postgres', 'DROP TABLE seeded_random;');
 # backslash commands
 pgbench(
 	'-t 1', 0,
-	[   qr{type: .*/001_pgbench_backslash_commands},
+	[
+		qr{type: .*/001_pgbench_backslash_commands},
 		qr{processed: 1/1},
-		qr{shell-echo-output} ],
+		qr{shell-echo-output}
+	],
 	[qr{command=8.: int 2\b}],
 	'pgbench backslash commands',
-	{   '001_pgbench_backslash_commands' => q{-- run set
+	{
+		'001_pgbench_backslash_commands' => q{-- run set
 \set zero 0
 \set one 1.0
 -- sleep
@@ -482,36 +538,49 @@ pgbench(
 \set n debug(:two)
 -- shell
 \shell echo shell-echo-output
-} });
+}
+	}
+);
 
 # trigger many expression errors
 my @errors = (
 
 	# [ test name, script number, status, stderr match ]
 	# SQL
-	[   'sql syntax error',
+	[
+		'sql syntax error',
 		0,
-		[   qr{ERROR:  syntax error},
-			qr{prepared statement .* does not exist} ],
+		[
+			qr{ERROR:  syntax error},
+			qr{prepared statement .* does not exist}
+		],
 		q{-- SQL syntax error
     SELECT 1 + ;
-} ],
-	[   'sql too many args', 1, [qr{statement has too many arguments.*\b9\b}],
+}
+	],
+	[
+		'sql too many args', 1, [qr{statement has too many arguments.*\b9\b}],
 		q{-- MAX_ARGS=10 for prepared
 \set i 0
 SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i);
-} ],
+}
+	],
 
 	# SHELL
-	[   'shell bad command',                    0,
-		[qr{\(shell\) .* meta-command failed}], q{\shell no-such-command} ],
-	[   'shell undefined variable', 0,
+	[
+		'shell bad command',                    0,
+		[qr{\(shell\) .* meta-command failed}], q{\shell no-such-command}
+	],
+	[
+		'shell undefined variable', 0,
 		[qr{undefined variable ":nosuchvariable"}],
 		q{-- undefined variable in shell
 \shell echo ::foo :nosuchvariable
-} ],
+}
+	],
 	[ 'shell missing command', 1, [qr{missing command }], q{\shell} ],
-	[   'shell too many args', 1, [qr{too many arguments in command "shell"}],
+	[
+		'shell too many args', 1, [qr{too many arguments in command "shell"}],
 		q{-- 257 arguments to \shell
 \shell echo \
  0 1 2 3 4 5 6 7 8 9 A B C D E F \
@@ -530,95 +599,155 @@ SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i);
  0 1 2 3 4 5 6 7 8 9 A B C D E F \
  0 1 2 3 4 5 6 7 8 9 A B C D E F \
  0 1 2 3 4 5 6 7 8 9 A B C D E F
-} ],
+}
+	],
 
 	# SET
-	[   'set syntax error',                  1,
-		[qr{syntax error in command "set"}], q{\set i 1 +} ],
-	[   'set no such function',         1,
-		[qr{unexpected function name}], q{\set i noSuchFunction()} ],
-	[   'set invalid variable name', 0,
-		[qr{invalid variable name}], q{\set . 1} ],
-	[   'set int overflow',                   0,
-		[qr{double to int overflow for 100}], q{\set i int(1E32)} ],
+	[
+		'set syntax error',                  1,
+		[qr{syntax error in command "set"}], q{\set i 1 +}
+	],
+	[
+		'set no such function',         1,
+		[qr{unexpected function name}], q{\set i noSuchFunction()}
+	],
+	[
+		'set invalid variable name', 0,
+		[qr{invalid variable name}], q{\set . 1}
+	],
+	[
+		'set int overflow',                   0,
+		[qr{double to int overflow for 100}], q{\set i int(1E32)}
+	],
 	[ 'set division by zero', 0, [qr{division by zero}], q{\set i 1/0} ],
-	[   'set bigint out of range', 0,
-		[qr{bigint out of range}], q{\set i 9223372036854775808 / -1} ],
-	[   'set undefined variable',
+	[
+		'set bigint out of range', 0,
+		[qr{bigint out of range}], q{\set i 9223372036854775808 / -1}
+	],
+	[
+		'set undefined variable',
 		0,
 		[qr{undefined variable "nosuchvariable"}],
-		q{\set i :nosuchvariable} ],
+		q{\set i :nosuchvariable}
+	],
 	[ 'set unexpected char', 1, [qr{unexpected character .;.}], q{\set i ;} ],
-	[   'set too many args',
+	[
+		'set too many args',
 		0,
 		[qr{too many function arguments}],
-		q{\set i least(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)} ],
-	[   'set empty random range',          0,
-		[qr{empty range given to random}], q{\set i random(5,3)} ],
-	[   'set random range too large',
+		q{\set i least(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)}
+	],
+	[
+		'set empty random range',          0,
+		[qr{empty range given to random}], q{\set i random(5,3)}
+	],
+	[
+		'set random range too large',
 		0,
 		[qr{random range is too large}],
-		q{\set i random(-9223372036854775808, 9223372036854775807)} ],
-	[   'set gaussian param too small',
+		q{\set i random(-9223372036854775808, 9223372036854775807)}
+	],
+	[
+		'set gaussian param too small',
 		0,
 		[qr{gaussian param.* at least 2}],
-		q{\set i random_gaussian(0, 10, 1.0)} ],
-	[   'set exponential param greater 0',
+		q{\set i random_gaussian(0, 10, 1.0)}
+	],
+	[
+		'set exponential param greater 0',
 		0,
 		[qr{exponential parameter must be greater }],
-		q{\set i random_exponential(0, 10, 0.0)} ],
-	[   'set zipfian param to 1',
+		q{\set i random_exponential(0, 10, 0.0)}
+	],
+	[
+		'set zipfian param to 1',
 		0,
 		[qr{zipfian parameter must be in range \(0, 1\) U \(1, \d+\]}],
-		q{\set i random_zipfian(0, 10, 1)} ],
-	[   'set zipfian param too large',
+		q{\set i random_zipfian(0, 10, 1)}
+	],
+	[
+		'set zipfian param too large',
 		0,
 		[qr{zipfian parameter must be in range \(0, 1\) U \(1, \d+\]}],
-		q{\set i random_zipfian(0, 10, 1000000)} ],
-	[   'set non numeric value',                     0,
-		[qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1} ],
+		q{\set i random_zipfian(0, 10, 1000000)}
+	],
+	[
+		'set non numeric value',                     0,
+		[qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1}
+	],
 	[ 'set no expression',    1, [qr{syntax error}],      q{\set i} ],
 	[ 'set missing argument', 1, [qr{missing argument}i], q{\set} ],
-	[   'set not a bool',                      0,
-		[qr{cannot coerce double to boolean}], q{\set b NOT 0.0} ],
-	[   'set not an int',                   0,
-		[qr{cannot coerce boolean to int}], q{\set i TRUE + 2} ],
-	[   'set not a double',                    0,
-		[qr{cannot coerce boolean to double}], q{\set d ln(TRUE)} ],
-	[   'set case error',
+	[
+		'set not a bool',                      0,
+		[qr{cannot coerce double to boolean}], q{\set b NOT 0.0}
+	],
+	[
+		'set not an int',                   0,
+		[qr{cannot coerce boolean to int}], q{\set i TRUE + 2}
+	],
+	[
+		'set not a double',                    0,
+		[qr{cannot coerce boolean to double}], q{\set d ln(TRUE)}
+	],
+	[
+		'set case error',
 		1,
 		[qr{syntax error in command "set"}],
-		q{\set i CASE TRUE THEN 1 ELSE 0 END} ],
-	[   'set random error',                 0,
-		[qr{cannot coerce boolean to int}], q{\set b random(FALSE, TRUE)} ],
-	[   'set number of args mismatch',        1,
-		[qr{unexpected number of arguments}], q{\set d ln(1.0, 2.0))} ],
-	[   'set at least one arg',               1,
-		[qr{at least one argument expected}], q{\set i greatest())} ],
+		q{\set i CASE TRUE THEN 1 ELSE 0 END}
+	],
+	[
+		'set random error',                 0,
+		[qr{cannot coerce boolean to int}], q{\set b random(FALSE, TRUE)}
+	],
+	[
+		'set number of args mismatch',        1,
+		[qr{unexpected number of arguments}], q{\set d ln(1.0, 2.0))}
+	],
+	[
+		'set at least one arg',               1,
+		[qr{at least one argument expected}], q{\set i greatest())}
+	],
 
 	# SETSHELL
-	[   'setshell not an int',                0,
-		[qr{command must return an integer}], q{\setshell i echo -n one} ],
+	[
+		'setshell not an int',                0,
+		[qr{command must return an integer}], q{\setshell i echo -n one}
+	],
 	[ 'setshell missing arg', 1, [qr{missing argument }], q{\setshell var} ],
-	[   'setshell no such command',   0,
-		[qr{could not read result }], q{\setshell var no-such-command} ],
+	[
+		'setshell no such command',   0,
+		[qr{could not read result }], q{\setshell var no-such-command}
+	],
 
 	# SLEEP
-	[   'sleep undefined variable',      0,
-		[qr{sleep: undefined variable}], q{\sleep :nosuchvariable} ],
-	[   'sleep too many args',    1,
-		[qr{too many arguments}], q{\sleep too many args} ],
-	[   'sleep missing arg', 1,
-		[ qr{missing argument}, qr{\\sleep} ], q{\sleep} ],
-	[   'sleep unknown unit',         1,
-		[qr{unrecognized time unit}], q{\sleep 1 week} ],
+	[
+		'sleep undefined variable',      0,
+		[qr{sleep: undefined variable}], q{\sleep :nosuchvariable}
+	],
+	[
+		'sleep too many args',    1,
+		[qr{too many arguments}], q{\sleep too many args}
+	],
+	[
+		'sleep missing arg', 1,
+		[ qr{missing argument}, qr{\\sleep} ], q{\sleep}
+	],
+	[
+		'sleep unknown unit',         1,
+		[qr{unrecognized time unit}], q{\sleep 1 week}
+	],
 
 	# MISC
-	[   'misc invalid backslash command',         1,
-		[qr{invalid command .* "nosuchcommand"}], q{\nosuchcommand} ],
+	[
+		'misc invalid backslash command',         1,
+		[qr{invalid command .* "nosuchcommand"}], q{\nosuchcommand}
+	],
 	[ 'misc empty script', 1, [qr{empty command list for script}], q{} ],
-	[   'bad boolean',                     0,
-		[qr{malformed variable.*trueXXX}], q{\set b :badtrue or true} ],);
+	[
+		'bad boolean',                     0,
+		[qr{malformed variable.*trueXXX}], q{\set b :badtrue or true}
+	],
+);
 
 
 for my $e (@errors)
@@ -632,7 +761,8 @@ for my $e (@errors)
 		[ $status ? qr{^$} : qr{processed: 0/1} ],
 		$re,
 		'pgbench script error: ' . $name,
-		{ $n => $script });
+		{ $n => $script }
+	);
 }
 
 # zipfian cache array overflow
@@ -641,7 +771,8 @@ pgbench(
 	[ qr{processed: 1/1}, qr{zipfian cache array overflowed 1 time\(s\)} ],
 	[qr{^}],
 	'pgbench zipfian array overflow on random_zipfian',
-	{   '001_pgbench_random_zipfian' => q{
+	{
+		'001_pgbench_random_zipfian' => q{
 \set i random_zipfian(1, 100, 0.5)
 \set i random_zipfian(2, 100, 0.5)
 \set i random_zipfian(3, 100, 0.5)
@@ -658,7 +789,9 @@ pgbench(
 \set i random_zipfian(14, 100, 0.5)
 \set i random_zipfian(15, 100, 0.5)
 \set i random_zipfian(16, 100, 0.5)
-} });
+}
+	}
+);
 
 # throttling
 pgbench(
@@ -666,19 +799,23 @@ pgbench(
 	0,
 	[ qr{processed: 200/200}, qr{builtin: select only} ],
 	[qr{^$}],
-	'pgbench throttling');
+	'pgbench throttling'
+);
 
 pgbench(
 
 	# given the expected rate and the 2 ms tx duration, at most one is executed
 	'-t 10 --rate=100000 --latency-limit=1 -n -r',
 	0,
-	[   qr{processed: [01]/10},
+	[
+		qr{processed: [01]/10},
 		qr{type: .*/001_pgbench_sleep},
-		qr{above the 1.0 ms latency limit: [01]/} ],
+		qr{above the 1.0 ms latency limit: [01]/}
+	],
 	[qr{^$}i],
 	'pgbench late throttling',
-	{ '001_pgbench_sleep' => q{\sleep 2ms} });
+	{ '001_pgbench_sleep' => q{\sleep 2ms} }
+);
 
 # check log contents and cleanup
 sub check_pgbench_logs
@@ -696,10 +833,14 @@ sub check_pgbench_logs
 			open my $fh, '<', $log or die "$@";
 			my @contents = <$fh>;
 			my $clen     = @contents;
-			ok( $min <= $clen && $clen <= $max,
-				"transaction count for $log ($clen)");
-			ok( grep($re, @contents) == $clen,
-				"transaction format for $prefix");
+			ok(
+				$min <= $clen && $clen <= $max,
+				"transaction count for $log ($clen)"
+			);
+			ok(
+				grep($re, @contents) == $clen,
+				"transaction format for $prefix"
+			);
 			close $fh or die "$@";
 		};
 	}
@@ -714,7 +855,8 @@ pgbench(
 	0,
 	[ qr{select only}, qr{processed: 100/100} ],
 	[qr{^$}],
-	'pgbench logs');
+	'pgbench logs'
+);
 
 check_pgbench_logs("$bdir/001_pgbench_log_2", 1, 8, 92,
 	qr{^0 \d{1,2} \d+ \d \d+ \d+$});
@@ -723,7 +865,8 @@ check_pgbench_logs("$bdir/001_pgbench_log_2", 1, 8, 92,
 pgbench(
 	"-n -b se -t 10 -l --log-prefix=$bdir/001_pgbench_log_3",
 	0, [ qr{select only}, qr{processed: 10/10} ],
-	[qr{^$}], 'pgbench logs contents');
+	[qr{^$}], 'pgbench logs contents'
+);
 
 check_pgbench_logs("$bdir/001_pgbench_log_3", 1, 10, 10,
 	qr{^\d \d{1,2} \d+ \d \d+ \d+$});
diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl
index 7dcc812..91c10e1 100644
--- a/src/bin/pgbench/t/002_pgbench_no_server.pl
+++ b/src/bin/pgbench/t/002_pgbench_no_server.pl
@@ -57,81 +57,127 @@ sub pgbench_scripts
 my @options = (
 
 	# name, options, stderr checks
-	[   'bad option',
+	[
+		'bad option',
 		'-h home -p 5432 -U calvin -d --bad-option',
-		[ qr{(unrecognized|illegal) option}, qr{--help.*more information} ] ],
-	[   'no file',
+		[ qr{(unrecognized|illegal) option}, qr{--help.*more information} ]
+	],
+	[
+		'no file',
 		'-f no-such-file',
-		[qr{could not open file "no-such-file":}] ],
-	[   'no builtin',
+		[qr{could not open file "no-such-file":}]
+	],
+	[
+		'no builtin',
 		'-b no-such-builtin',
-		[qr{no builtin script .* "no-such-builtin"}] ],
-	[   'invalid weight',
+		[qr{no builtin script .* "no-such-builtin"}]
+	],
+	[
+		'invalid weight',
 		'--builtin=select-only@one',
-		[qr{invalid weight specification: \@one}] ],
-	[   'invalid weight',
+		[qr{invalid weight specification: \@one}]
+	],
+	[
+		'invalid weight',
 		'-b select-only@-1',
-		[qr{weight spec.* out of range .*: -1}] ],
+		[qr{weight spec.* out of range .*: -1}]
+	],
 	[ 'too many scripts', '-S ' x 129, [qr{at most 128 SQL scripts}] ],
 	[ 'bad #clients', '-c three', [qr{invalid number of clients: "three"}] ],
-	[   'bad #threads', '-j eleven', [qr{invalid number of threads: "eleven"}]
+	[
+		'bad #threads', '-j eleven', [qr{invalid number of threads: "eleven"}]
 	],
 	[ 'bad scale', '-i -s two', [qr{invalid scaling factor: "two"}] ],
-	[   'invalid #transactions',
+	[
+		'invalid #transactions',
 		'-t zil',
-		[qr{invalid number of transactions: "zil"}] ],
+		[qr{invalid number of transactions: "zil"}]
+	],
 	[ 'invalid duration', '-T ten', [qr{invalid duration: "ten"}] ],
-	[   '-t XOR -T',
+	[
+		'-t XOR -T',
 		'-N -l --aggregate-interval=5 --log-prefix=notused -t 1000 -T 1',
-		[qr{specify either }] ],
-	[   '-T XOR -t',
+		[qr{specify either }]
+	],
+	[
+		'-T XOR -t',
 		'-P 1 --progress-timestamp -l --sampling-rate=0.001 -T 10 -t 1000',
-		[qr{specify either }] ],
+		[qr{specify either }]
+	],
 	[ 'bad variable', '--define foobla', [qr{invalid variable definition}] ],
 	[ 'invalid fillfactor', '-F 1',            [qr{invalid fillfactor}] ],
 	[ 'invalid query mode', '-M no-such-mode', [qr{invalid query mode}] ],
-	[   'invalid progress', '--progress=0',
-		[qr{invalid thread progress delay}] ],
+	[
+		'invalid progress', '--progress=0',
+		[qr{invalid thread progress delay}]
+	],
 	[ 'invalid rate',    '--rate=0.0',          [qr{invalid rate limit}] ],
 	[ 'invalid latency', '--latency-limit=0.0', [qr{invalid latency limit}] ],
-	[   'invalid sampling rate', '--sampling-rate=0',
-		[qr{invalid sampling rate}] ],
-	[   'invalid aggregate interval', '--aggregate-interval=-3',
-		[qr{invalid .* seconds for}] ],
-	[   'weight zero',
+	[
+		'invalid sampling rate', '--sampling-rate=0',
+		[qr{invalid sampling rate}]
+	],
+	[
+		'invalid aggregate interval', '--aggregate-interval=-3',
+		[qr{invalid .* seconds for}]
+	],
+	[
+		'weight zero',
 		'-b se@0 -b si@0 -b tpcb@0',
-		[qr{weight must not be zero}] ],
+		[qr{weight must not be zero}]
+	],
 	[ 'init vs run', '-i -S',    [qr{cannot be used in initialization}] ],
 	[ 'run vs init', '-S -F 90', [qr{cannot be used in benchmarking}] ],
 	[ 'ambiguous builtin', '-b s', [qr{ambiguous}] ],
-	[   '--progress-timestamp => --progress', '--progress-timestamp',
-		[qr{allowed only under}] ],
-	[   '-I without init option',
+	[
+		'--progress-timestamp => --progress', '--progress-timestamp',
+		[qr{allowed only under}]
+	],
+	[
+		'-I without init option',
 		'-I dtg',
-		[qr{cannot be used in benchmarking mode}] ],
-	[   'invalid init step',
+		[qr{cannot be used in benchmarking mode}]
+	],
+	[
+		'invalid init step',
 		'-i -I dta',
-		[ qr{unrecognized initialization step}, qr{allowed steps are} ] ],
-	[   'bad random seed',
+		[ qr{unrecognized initialization step}, qr{allowed steps are} ]
+	],
+	[
+		'bad random seed',
 		'--random-seed=one',
-		[   qr{unrecognized random seed option "one": expecting an unsigned integer, "time" or "rand"},
-			qr{error while setting random seed from --random-seed option} ] ],
+		[
+			qr{unrecognized random seed option "one": expecting an unsigned integer, "time" or "rand"},
+			qr{error while setting random seed from --random-seed option}
+		]
+	],
 
 	# loging sub-options
-	[   'sampling => log', '--sampling-rate=0.01',
-		[qr{log sampling .* only when}] ],
-	[   'sampling XOR aggregate',
+	[
+		'sampling => log', '--sampling-rate=0.01',
+		[qr{log sampling .* only when}]
+	],
+	[
+		'sampling XOR aggregate',
 		'-l --sampling-rate=0.1 --aggregate-interval=3',
-		[qr{sampling .* aggregation .* cannot be used at the same time}] ],
-	[   'aggregate => log', '--aggregate-interval=3',
-		[qr{aggregation .* only when}] ],
+		[qr{sampling .* aggregation .* cannot be used at the same time}]
+	],
+	[
+		'aggregate => log', '--aggregate-interval=3',
+		[qr{aggregation .* only when}]
+	],
 	[ 'log-prefix => log', '--log-prefix=x', [qr{prefix .* only when}] ],
-	[   'duration & aggregation',
+	[
+		'duration & aggregation',
 		'-l -T 1 --aggregate-interval=3',
-		[qr{aggr.* not be higher}] ],
-	[   'duration % aggregation',
+		[qr{aggr.* not be higher}]
+	],
+	[
+		'duration % aggregation',
 		'-l -T 5 --aggregate-interval=3',
-		[qr{multiple}] ],);
+		[qr{multiple}]
+	],
+);
 
 for my $o (@options)
 {
@@ -143,13 +189,16 @@ for my $o (@options)
 # Help
 pgbench(
 	'--help', 0,
-	[   qr{benchmarking tool for PostgreSQL},
+	[
+		qr{benchmarking tool for PostgreSQL},
 		qr{Usage},
 		qr{Initialization options:},
 		qr{Common options:},
-		qr{Report bugs to} ],
+		qr{Report bugs to}
+	],
 	[qr{^$}],
-	'pgbench help');
+	'pgbench help'
+);
 
 # Version
 pgbench('-V', 0, [qr{^pgbench .PostgreSQL. }], [qr{^$}], 'pgbench version');
@@ -159,43 +208,67 @@ pgbench(
 	'-b list',
 	0,
 	[qr{^$}],
-	[   qr{Available builtin scripts:}, qr{tpcb-like},
-		qr{simple-update},              qr{select-only} ],
-	'pgbench builtin list');
+	[
+		qr{Available builtin scripts:}, qr{tpcb-like},
+		qr{simple-update},              qr{select-only}
+	],
+	'pgbench builtin list'
+);
 
 my @script_tests = (
 
 	# name, err, { file => contents }
-	[   'missing endif',
+	[
+		'missing endif',
 		[qr{\\if without matching \\endif}],
-		{ 'if-noendif.sql' => '\if 1' } ],
-	[   'missing if on elif',
+		{ 'if-noendif.sql' => '\if 1' }
+	],
+	[
+		'missing if on elif',
 		[qr{\\elif without matching \\if}],
-		{ 'elif-noif.sql' => '\elif 1' } ],
-	[   'missing if on else',
+		{ 'elif-noif.sql' => '\elif 1' }
+	],
+	[
+		'missing if on else',
 		[qr{\\else without matching \\if}],
-		{ 'else-noif.sql' => '\else' } ],
-	[   'missing if on endif',
+		{ 'else-noif.sql' => '\else' }
+	],
+	[
+		'missing if on endif',
 		[qr{\\endif without matching \\if}],
-		{ 'endif-noif.sql' => '\endif' } ],
-	[   'elif after else',
+		{ 'endif-noif.sql' => '\endif' }
+	],
+	[
+		'elif after else',
 		[qr{\\elif after \\else}],
-		{ 'else-elif.sql' => "\\if 1\n\\else\n\\elif 0\n\\endif" } ],
-	[   'else after else',
+		{ 'else-elif.sql' => "\\if 1\n\\else\n\\elif 0\n\\endif" }
+	],
+	[
+		'else after else',
 		[qr{\\else after \\else}],
-		{ 'else-else.sql' => "\\if 1\n\\else\n\\else\n\\endif" } ],
-	[   'if syntax error',
+		{ 'else-else.sql' => "\\if 1\n\\else\n\\else\n\\endif" }
+	],
+	[
+		'if syntax error',
 		[qr{syntax error in command "if"}],
-		{ 'if-bad.sql' => "\\if\n\\endif\n" } ],
-	[   'elif syntax error',
+		{ 'if-bad.sql' => "\\if\n\\endif\n" }
+	],
+	[
+		'elif syntax error',
 		[qr{syntax error in command "elif"}],
-		{ 'elif-bad.sql' => "\\if 0\n\\elif +\n\\endif\n" } ],
-	[   'else syntax error',
+		{ 'elif-bad.sql' => "\\if 0\n\\elif +\n\\endif\n" }
+	],
+	[
+		'else syntax error',
 		[qr{unexpected argument in command "else"}],
-		{ 'else-bad.sql' => "\\if 0\n\\else BAD\n\\endif\n" } ],
-	[   'endif syntax error',
+		{ 'else-bad.sql' => "\\if 0\n\\else BAD\n\\endif\n" }
+	],
+	[
+		'endif syntax error',
 		[qr{unexpected argument in command "endif"}],
-		{ 'endif-bad.sql' => "\\if 0\n\\endif BAD\n" } ],);
+		{ 'endif-bad.sql' => "\\if 0\n\\endif BAD\n" }
+	],
+);
 
 for my $t (@script_tests)
 {
diff --git a/src/bin/psql/create_help.pl b/src/bin/psql/create_help.pl
index cb0e6e8..08ed032 100644
--- a/src/bin/psql/create_help.pl
+++ b/src/bin/psql/create_help.pl
@@ -149,7 +149,8 @@ foreach my $file (sort readdir DIR)
 				cmddesc     => $cmddesc,
 				cmdsynopsis => $cmdsynopsis,
 				params      => \@params,
-				nl_count    => $nl_count };
+				nl_count    => $nl_count
+			};
 			$maxlen =
 			  ($maxlen >= length $cmdname) ? $maxlen : length $cmdname;
 		}
diff --git a/src/bin/scripts/t/010_clusterdb.pl b/src/bin/scripts/t/010_clusterdb.pl
index ba093fa..4d1157d 100644
--- a/src/bin/scripts/t/010_clusterdb.pl
+++ b/src/bin/scripts/t/010_clusterdb.pl
@@ -16,7 +16,8 @@ $node->start;
 $node->issues_sql_like(
 	['clusterdb'],
 	qr/statement: CLUSTER;/,
-	'SQL CLUSTER run');
+	'SQL CLUSTER run'
+);
 
 $node->command_fails([ 'clusterdb', '-t', 'nonexistent' ],
 	'fails with nonexistent table');
@@ -27,7 +28,8 @@ $node->safe_psql('postgres',
 $node->issues_sql_like(
 	[ 'clusterdb', '-t', 'test1' ],
 	qr/statement: CLUSTER public\.test1;/,
-	'cluster specific table');
+	'cluster specific table'
+);
 
 $node->command_ok([qw(clusterdb --echo --verbose dbname=template1)],
 	'clusterdb with connection string');
diff --git a/src/bin/scripts/t/011_clusterdb_all.pl b/src/bin/scripts/t/011_clusterdb_all.pl
index efd541b..6de273b 100644
--- a/src/bin/scripts/t/011_clusterdb_all.pl
+++ b/src/bin/scripts/t/011_clusterdb_all.pl
@@ -16,4 +16,5 @@ $ENV{PGDATABASE} = 'postgres';
 $node->issues_sql_like(
 	[ 'clusterdb', '-a' ],
 	qr/statement: CLUSTER.*statement: CLUSTER/s,
-	'cluster all databases');
+	'cluster all databases'
+);
diff --git a/src/bin/scripts/t/020_createdb.pl b/src/bin/scripts/t/020_createdb.pl
index c0f6067..ed7ffa5 100644
--- a/src/bin/scripts/t/020_createdb.pl
+++ b/src/bin/scripts/t/020_createdb.pl
@@ -16,11 +16,13 @@ $node->start;
 $node->issues_sql_like(
 	[ 'createdb', 'foobar1' ],
 	qr/statement: CREATE DATABASE foobar1/,
-	'SQL CREATE DATABASE run');
+	'SQL CREATE DATABASE run'
+);
 $node->issues_sql_like(
 	[ 'createdb', '-l', 'C', '-E', 'LATIN1', '-T', 'template0', 'foobar2' ],
 	qr/statement: CREATE DATABASE foobar2 ENCODING 'LATIN1'/,
-	'create database with encoding');
+	'create database with encoding'
+);
 
 $node->command_fails([ 'createdb', 'foobar1' ],
 	'fails if database already exists');
diff --git a/src/bin/scripts/t/040_createuser.pl b/src/bin/scripts/t/040_createuser.pl
index 916d925..3a57801 100644
--- a/src/bin/scripts/t/040_createuser.pl
+++ b/src/bin/scripts/t/040_createuser.pl
@@ -16,19 +16,23 @@ $node->start;
 $node->issues_sql_like(
 	[ 'createuser', 'regress_user1' ],
 	qr/statement: CREATE ROLE regress_user1 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;/,
-	'SQL CREATE USER run');
+	'SQL CREATE USER run'
+);
 $node->issues_sql_like(
 	[ 'createuser', '-L', 'regress_role1' ],
 	qr/statement: CREATE ROLE regress_role1 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOLOGIN;/,
-	'create a non-login role');
+	'create a non-login role'
+);
 $node->issues_sql_like(
 	[ 'createuser', '-r', 'regress_user2' ],
 	qr/statement: CREATE ROLE regress_user2 NOSUPERUSER NOCREATEDB CREATEROLE INHERIT LOGIN;/,
-	'create a CREATEROLE user');
+	'create a CREATEROLE user'
+);
 $node->issues_sql_like(
 	[ 'createuser', '-s', 'regress_user3' ],
 	qr/statement: CREATE ROLE regress_user3 SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;/,
-	'create a superuser');
+	'create a superuser'
+);
 
 $node->command_fails([ 'createuser', 'regress_user1' ],
 	'fails if role already exists');
diff --git a/src/bin/scripts/t/050_dropdb.pl b/src/bin/scripts/t/050_dropdb.pl
index 25aa54a..2863420 100644
--- a/src/bin/scripts/t/050_dropdb.pl
+++ b/src/bin/scripts/t/050_dropdb.pl
@@ -17,7 +17,8 @@ $node->safe_psql('postgres', 'CREATE DATABASE foobar1');
 $node->issues_sql_like(
 	[ 'dropdb', 'foobar1' ],
 	qr/statement: DROP DATABASE foobar1/,
-	'SQL DROP DATABASE run');
+	'SQL DROP DATABASE run'
+);
 
 $node->command_fails([ 'dropdb', 'nonexistent' ],
 	'fails with nonexistent database');
diff --git a/src/bin/scripts/t/070_dropuser.pl b/src/bin/scripts/t/070_dropuser.pl
index 2e858c5..f6caa9e 100644
--- a/src/bin/scripts/t/070_dropuser.pl
+++ b/src/bin/scripts/t/070_dropuser.pl
@@ -17,7 +17,8 @@ $node->safe_psql('postgres', 'CREATE ROLE regress_foobar1');
 $node->issues_sql_like(
 	[ 'dropuser', 'regress_foobar1' ],
 	qr/statement: DROP ROLE regress_foobar1/,
-	'SQL DROP ROLE run');
+	'SQL DROP ROLE run'
+);
 
 $node->command_fails([ 'dropuser', 'regress_nonexistent' ],
 	'fails with nonexistent user');
diff --git a/src/bin/scripts/t/090_reindexdb.pl b/src/bin/scripts/t/090_reindexdb.pl
index e57a5e2..6412a27 100644
--- a/src/bin/scripts/t/090_reindexdb.pl
+++ b/src/bin/scripts/t/090_reindexdb.pl
@@ -18,36 +18,44 @@ $ENV{PGOPTIONS} = '--client-min-messages=WARNING';
 $node->issues_sql_like(
 	[ 'reindexdb', 'postgres' ],
 	qr/statement: REINDEX DATABASE postgres;/,
-	'SQL REINDEX run');
+	'SQL REINDEX run'
+);
 
 $node->safe_psql('postgres',
 	'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a);');
 $node->issues_sql_like(
 	[ 'reindexdb', '-t', 'test1', 'postgres' ],
 	qr/statement: REINDEX TABLE public\.test1;/,
-	'reindex specific table');
+	'reindex specific table'
+);
 $node->issues_sql_like(
 	[ 'reindexdb', '-i', 'test1x', 'postgres' ],
 	qr/statement: REINDEX INDEX public\.test1x;/,
-	'reindex specific index');
+	'reindex specific index'
+);
 $node->issues_sql_like(
 	[ 'reindexdb', '-S', 'pg_catalog', 'postgres' ],
 	qr/statement: REINDEX SCHEMA pg_catalog;/,
-	'reindex specific schema');
+	'reindex specific schema'
+);
 $node->issues_sql_like(
 	[ 'reindexdb', '-s', 'postgres' ],
 	qr/statement: REINDEX SYSTEM postgres;/,
-	'reindex system tables');
+	'reindex system tables'
+);
 $node->issues_sql_like(
 	[ 'reindexdb', '-v', '-t', 'test1', 'postgres' ],
 	qr/statement: REINDEX \(VERBOSE\) TABLE public\.test1;/,
-	'reindex with verbose output');
+	'reindex with verbose output'
+);
 
 $node->command_ok([qw(reindexdb --echo --table=pg_am dbname=template1)],
 	'reindexdb table with connection string');
 $node->command_ok(
 	[qw(reindexdb --echo dbname=template1)],
-	'reindexdb database with connection string');
+	'reindexdb database with connection string'
+);
 $node->command_ok(
 	[qw(reindexdb --echo --system dbname=template1)],
-	'reindexdb system with connection string');
+	'reindexdb system with connection string'
+);
diff --git a/src/bin/scripts/t/091_reindexdb_all.pl b/src/bin/scripts/t/091_reindexdb_all.pl
index 8e60414..aae38cd 100644
--- a/src/bin/scripts/t/091_reindexdb_all.pl
+++ b/src/bin/scripts/t/091_reindexdb_all.pl
@@ -13,4 +13,5 @@ $ENV{PGOPTIONS} = '--client-min-messages=WARNING';
 $node->issues_sql_like(
 	[ 'reindexdb', '-a' ],
 	qr/statement: REINDEX.*statement: REINDEX/s,
-	'reindex all databases');
+	'reindex all databases'
+);
diff --git a/src/bin/scripts/t/100_vacuumdb.pl b/src/bin/scripts/t/100_vacuumdb.pl
index 4c477a2..bae1d75 100644
--- a/src/bin/scripts/t/100_vacuumdb.pl
+++ b/src/bin/scripts/t/100_vacuumdb.pl
@@ -16,34 +16,41 @@ $node->start;
 $node->issues_sql_like(
 	[ 'vacuumdb', 'postgres' ],
 	qr/statement: VACUUM;/,
-	'SQL VACUUM run');
+	'SQL VACUUM run'
+);
 $node->issues_sql_like(
 	[ 'vacuumdb', '-f', 'postgres' ],
 	qr/statement: VACUUM \(FULL\);/,
-	'vacuumdb -f');
+	'vacuumdb -f'
+);
 $node->issues_sql_like(
 	[ 'vacuumdb', '-F', 'postgres' ],
 	qr/statement: VACUUM \(FREEZE\);/,
-	'vacuumdb -F');
+	'vacuumdb -F'
+);
 $node->issues_sql_like(
 	[ 'vacuumdb', '-zj2', 'postgres' ],
 	qr/statement: VACUUM \(ANALYZE\) pg_catalog\./,
-	'vacuumdb -zj2');
+	'vacuumdb -zj2'
+);
 $node->issues_sql_like(
 	[ 'vacuumdb', '-Z', 'postgres' ],
 	qr/statement: ANALYZE;/,
-	'vacuumdb -Z');
+	'vacuumdb -Z'
+);
 $node->command_ok([qw(vacuumdb -Z --table=pg_am dbname=template1)],
 	'vacuumdb with connection string');
 
 $node->command_fails(
 	[qw(vacuumdb -Zt pg_am;ABORT postgres)],
-	'trailing command in "-t", without COLUMNS');
+	'trailing command in "-t", without COLUMNS'
+);
 
 # Unwanted; better if it failed.
 $node->command_ok(
 	[qw(vacuumdb -Zt pg_am(amname);ABORT postgres)],
-	'trailing command in "-t", with COLUMNS');
+	'trailing command in "-t", with COLUMNS'
+);
 
 $node->safe_psql(
 	'postgres', q|
@@ -54,9 +61,11 @@ $node->safe_psql(
   CREATE TABLE funcidx (x int);
   INSERT INTO funcidx VALUES (0),(1),(2),(3);
   CREATE INDEX i0 ON funcidx ((f1(x)));
-|);
+|
+);
 $node->command_ok([qw|vacuumdb -Z --table="need""q(uot"(")x") postgres|],
 	'column list');
 $node->command_fails(
 	[qw|vacuumdb -Zt funcidx postgres|],
-	'unqualifed name via functional index');
+	'unqualifed name via functional index'
+);
diff --git a/src/bin/scripts/t/101_vacuumdb_all.pl b/src/bin/scripts/t/101_vacuumdb_all.pl
index 4321258..d44dcd6 100644
--- a/src/bin/scripts/t/101_vacuumdb_all.pl
+++ b/src/bin/scripts/t/101_vacuumdb_all.pl
@@ -11,4 +11,5 @@ $node->start;
 $node->issues_sql_like(
 	[ 'vacuumdb', '-a' ],
 	qr/statement: VACUUM.*statement: VACUUM/s,
-	'vacuum all databases');
+	'vacuum all databases'
+);
diff --git a/src/bin/scripts/t/102_vacuumdb_stages.pl b/src/bin/scripts/t/102_vacuumdb_stages.pl
index 3929441..5927e3f 100644
--- a/src/bin/scripts/t/102_vacuumdb_stages.pl
+++ b/src/bin/scripts/t/102_vacuumdb_stages.pl
@@ -16,7 +16,8 @@ $node->issues_sql_like(
                    .*statement:\ ANALYZE.*
                    .*statement:\ RESET\ default_statistics_target;
                    .*statement:\ ANALYZE/sx,
-	'analyze three times');
+	'analyze three times'
+);
 
 $node->issues_sql_like(
 	[ 'vacuumdb', '--analyze-in-stages', '--all' ],
@@ -32,4 +33,5 @@ $node->issues_sql_like(
                    .*statement:\ ANALYZE.*
                    .*statement:\ RESET\ default_statistics_target;
                    .*statement:\ ANALYZE/sx,
-	'analyze more than one database in stages');
+	'analyze more than one database in stages'
+);
diff --git a/src/bin/scripts/t/200_connstr.pl b/src/bin/scripts/t/200_connstr.pl
index a3aeee7..ab6d0d1 100644
--- a/src/bin/scripts/t/200_connstr.pl
+++ b/src/bin/scripts/t/200_connstr.pl
@@ -33,9 +33,11 @@ foreach my $dbname ($dbname1, $dbname2, $dbname3, $dbname4, 'CamelCase')
 
 $node->command_ok(
 	[qw(vacuumdb --all --echo --analyze-only)],
-	'vacuumdb --all with unusual database names');
+	'vacuumdb --all with unusual database names'
+);
 $node->command_ok([qw(reindexdb --all --echo)],
 	'reindexdb --all with unusual database names');
 $node->command_ok(
 	[qw(clusterdb --all --echo --verbose)],
-	'clusterdb --all with unusual database names');
+	'clusterdb --all with unusual database names'
+);
diff --git a/src/interfaces/ecpg/preproc/check_rules.pl b/src/interfaces/ecpg/preproc/check_rules.pl
index 6c8b004..ebd2f2d 100644
--- a/src/interfaces/ecpg/preproc/check_rules.pl
+++ b/src/interfaces/ecpg/preproc/check_rules.pl
@@ -43,7 +43,8 @@ my %replace_line = (
 	  => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
 
 	'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
-	  'PREPARE prepared_name prep_type_clause AS PreparableStmt');
+	  'PREPARE prepared_name prep_type_clause AS PreparableStmt'
+);
 
 my $block        = '';
 my $yaccmode     = 0;
diff --git a/src/interfaces/ecpg/preproc/parse.pl b/src/interfaces/ecpg/preproc/parse.pl
index 983c3a3..047d00e 100644
--- a/src/interfaces/ecpg/preproc/parse.pl
+++ b/src/interfaces/ecpg/preproc/parse.pl
@@ -38,7 +38,8 @@ my %replace_token = (
 	'FCONST' => 'ecpg_fconst',
 	'Sconst' => 'ecpg_sconst',
 	'IDENT'  => 'ecpg_ident',
-	'PARAM'  => 'ecpg_param',);
+	'PARAM'  => 'ecpg_param',
+);
 
 # or in the block
 my %replace_string = (
@@ -51,7 +52,8 @@ my %replace_string = (
 	'EQUALS_GREATER' => '=>',
 	'LESS_EQUALS'    => '<=',
 	'GREATER_EQUALS' => '>=',
-	'NOT_EQUALS'     => '<>',);
+	'NOT_EQUALS'     => '<>',
+);
 
 # specific replace_types for specific non-terminals - never include the ':'
 # ECPG-only replace_types are defined in ecpg-replace_types
@@ -67,7 +69,8 @@ my %replace_types = (
 	'ColId'              => 'ignore',
 	'type_function_name' => 'ignore',
 	'ColLabel'           => 'ignore',
-	'Sconst'             => 'ignore',);
+	'Sconst'             => 'ignore',
+);
 
 # these replace_line commands excise certain keywords from the core keyword
 # lists.  Be sure to account for these in ColLabel and related productions.
@@ -105,7 +108,8 @@ my %replace_line = (
 	  => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
 	'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
 	  'PREPARE prepared_name prep_type_clause AS PreparableStmt',
-	'var_nameColId' => 'ECPGColId',);
+	'var_nameColId' => 'ECPGColId',
+);
 
 preload_addons();
 
@@ -234,7 +238,8 @@ sub main
 		for (
 			my $fieldIndexer = 0;
 			$fieldIndexer < scalar(@arr);
-			$fieldIndexer++)
+			$fieldIndexer++
+		  )
 		{
 			if ($arr[$fieldIndexer] eq '*/' && $comment)
 			{
@@ -383,10 +388,12 @@ sub main
 				&& length($arr[$fieldIndexer])
 				&& $infield)
 			{
-				if ($arr[$fieldIndexer] ne 'Op'
+				if (
+					$arr[$fieldIndexer] ne 'Op'
 					&& (   $tokens{ $arr[$fieldIndexer] } > 0
 						|| $arr[$fieldIndexer] =~ /'.+'/)
-					|| $stmt_mode == 1)
+					|| $stmt_mode == 1
+				  )
 				{
 					my $S;
 					if (exists $replace_string{ $arr[$fieldIndexer] })
diff --git a/src/pl/plperl/plperl_opmask.pl b/src/pl/plperl/plperl_opmask.pl
index e4e64b8..503be79 100644
--- a/src/pl/plperl/plperl_opmask.pl
+++ b/src/pl/plperl/plperl_opmask.pl
@@ -43,7 +43,8 @@ my @allowed_ops = (
 	# used it. Even then it's unlikely to be seen because it's typically
 	# generated by compiler plugins that operate after PL_op_mask checks.
 	# But we err on the side of caution and disable it
-	qw[!custom],);
+	qw[!custom],
+);
 
 printf $fh "  /* ALLOWED: @allowed_ops */ \\\n";
 
diff --git a/src/pl/plperl/text2macro.pl b/src/pl/plperl/text2macro.pl
index 27c6ef7..93a2e71 100644
--- a/src/pl/plperl/text2macro.pl
+++ b/src/pl/plperl/text2macro.pl
@@ -32,7 +32,8 @@ GetOptions(
 	'prefix=s'  => \my $opt_prefix,
 	'name=s'    => \my $opt_name,
 	'strip=s'   => \my $opt_strip,
-	'selftest!' => sub { exit selftest() },) or exit 1;
+	'selftest!' => sub { exit selftest() },
+) or exit 1;
 
 die "No text files specified"
   unless @ARGV;
diff --git a/src/test/authentication/t/002_saslprep.pl b/src/test/authentication/t/002_saslprep.pl
index e09273e..bef6705 100644
--- a/src/test/authentication/t/002_saslprep.pl
+++ b/src/test/authentication/t/002_saslprep.pl
@@ -76,7 +76,8 @@ CREATE ROLE saslpreptest4a_role LOGIN PASSWORD 'a';
 CREATE ROLE saslpreptest4b_role LOGIN PASSWORD E'\\xc2\\xaa';
 CREATE ROLE saslpreptest6_role LOGIN PASSWORD E'foo\\x07bar';
 CREATE ROLE saslpreptest7_role LOGIN PASSWORD E'foo\\u0627\\u0031bar';
-");
+"
+);
 
 # Require password from now on.
 reset_pg_hba($node, 'scram-sha-256');
diff --git a/src/test/kerberos/t/001_auth.pl b/src/test/kerberos/t/001_auth.pl
index ba90231..b16cd07 100644
--- a/src/test/kerberos/t/001_auth.pl
+++ b/src/test/kerberos/t/001_auth.pl
@@ -81,12 +81,14 @@ default_realm = $realm
 [realms]
 $realm = {
     kdc = localhost:$kdc_port
-}!);
+}!
+);
 
 append_to_file(
 	$kdc_conf,
 	qq![kdcdefaults]
-!);
+!
+);
 
 # For new-enough versions of krb5, use the _listen settings rather
 # than the _ports settings so that we can bind to localhost only.
@@ -96,7 +98,8 @@ if ($krb5_version >= 1.15)
 		$kdc_conf,
 		qq!kdc_listen = localhost:$kdc_port
 kdc_tcp_listen = localhost:$kdc_port
-!);
+!
+	);
 }
 else
 {
@@ -104,7 +107,8 @@ else
 		$kdc_conf,
 		qq!kdc_ports = $kdc_port
 kdc_tcp_ports = $kdc_port
-!);
+!
+	);
 }
 append_to_file(
 	$kdc_conf,
@@ -115,7 +119,8 @@ $realm = {
     admin_keytab = FILE:$kdc_datadir/kadm5.keytab
     acl_file = $kdc_datadir/kadm5.acl
     key_stash_file = $kdc_datadir/_k5.$realm
-}!);
+}!
+);
 
 mkdir $kdc_datadir or die;
 
@@ -161,7 +166,9 @@ sub test_access
 		'SELECT 1',
 		extra_params => [
 			'-d', $node->connstr('postgres') . ' host=localhost',
-			'-U', $role ]);
+			'-U', $role
+		]
+	);
 	is($res, $expected_res, $test_name);
 }
 
diff --git a/src/test/ldap/t/001_auth.pl b/src/test/ldap/t/001_auth.pl
index 9ade9a2..f91d7ce 100644
--- a/src/test/ldap/t/001_auth.pl
+++ b/src/test/ldap/t/001_auth.pl
@@ -82,13 +82,15 @@ TLSCertificateKeyFile $slapd_certs/server.key
 
 suffix "dc=example,dc=net"
 rootdn "$ldap_rootdn"
-rootpw $ldap_rootpw});
+rootpw $ldap_rootpw}
+);
 
 # don't bother to check the server's cert (though perhaps we should)
 append_to_file(
 	$ldap_conf,
 	qq{TLS_REQCERT never
-});
+}
+);
 
 mkdir $ldap_datadir or die;
 mkdir $slapd_certs  or die;
diff --git a/src/test/modules/brin/t/01_workitems.pl b/src/test/modules/brin/t/01_workitems.pl
index 534ab63..a20eb4b 100644
--- a/src/test/modules/brin/t/01_workitems.pl
+++ b/src/test/modules/brin/t/01_workitems.pl
@@ -32,7 +32,8 @@ $node->safe_psql('postgres',
 $node->poll_query_until(
 	'postgres',
 	"select count(*) > 1 from brin_page_items(get_raw_page('brin_wi_idx', 2), 'brin_wi_idx'::regclass)",
-	't');
+	't'
+);
 
 $count = $node->safe_psql('postgres',
 	"select count(*) > 1 from brin_page_items(get_raw_page('brin_wi_idx', 2), 'brin_wi_idx'::regclass)"
diff --git a/src/test/modules/commit_ts/t/002_standby.pl b/src/test/modules/commit_ts/t/002_standby.pl
index f376b59..73d1abf 100644
--- a/src/test/modules/commit_ts/t/002_standby.pl
+++ b/src/test/modules/commit_ts/t/002_standby.pl
@@ -15,7 +15,8 @@ $master->append_conf(
 	'postgresql.conf', qq{
 	track_commit_timestamp = on
 	max_wal_senders = 5
-	});
+	}
+);
 $master->start;
 $master->backup($bkplabel);
 
@@ -60,4 +61,5 @@ is($standby_ts_stdout, '',
 like(
 	$standby_ts_stderr,
 	qr/could not get commit timestamp data/,
-	'expected error when master turned feature off');
+	'expected error when master turned feature off'
+);
diff --git a/src/test/modules/commit_ts/t/003_standby_2.pl b/src/test/modules/commit_ts/t/003_standby_2.pl
index 9165d50..5033116 100644
--- a/src/test/modules/commit_ts/t/003_standby_2.pl
+++ b/src/test/modules/commit_ts/t/003_standby_2.pl
@@ -14,7 +14,8 @@ $master->append_conf(
 	'postgresql.conf', qq{
 	track_commit_timestamp = on
 	max_wal_senders = 5
-	});
+	}
+);
 $master->start;
 $master->backup($bkplabel);
 
@@ -47,7 +48,8 @@ is($standby_ts_stdout, '', "standby does not return a value after restart");
 like(
 	$standby_ts_stderr,
 	qr/could not get commit timestamp data/,
-	'expected err msg after restart');
+	'expected err msg after restart'
+);
 
 $master->append_conf('postgresql.conf', 'track_commit_timestamp = on');
 $master->restart;
diff --git a/src/test/modules/commit_ts/t/004_restart.pl b/src/test/modules/commit_ts/t/004_restart.pl
index daf42d3..658dbc8 100644
--- a/src/test/modules/commit_ts/t/004_restart.pl
+++ b/src/test/modules/commit_ts/t/004_restart.pl
@@ -18,7 +18,8 @@ is($ret, 3, 'getting ts of InvalidTransactionId reports error');
 like(
 	$stderr,
 	qr/cannot retrieve commit timestamp for transaction/,
-	'expected error from InvalidTransactionId');
+	'expected error from InvalidTransactionId'
+);
 
 ($ret, $stdout, $stderr) =
   $node_master->psql('postgres', qq[SELECT pg_xact_commit_timestamp('1');]);
@@ -33,10 +34,13 @@ is($stdout, '', 'timestamp of FrozenTransactionId is null');
 # Since FirstNormalTransactionId will've occurred during initdb, long before we
 # enabled commit timestamps, it'll be null since we have no cts data for it but
 # cts are enabled.
-is( $node_master->safe_psql(
-		'postgres', qq[SELECT pg_xact_commit_timestamp('3');]),
+is(
+	$node_master->safe_psql(
+		'postgres', qq[SELECT pg_xact_commit_timestamp('3');]
+	),
 	'',
-	'committs for FirstNormalTransactionId is null');
+	'committs for FirstNormalTransactionId is null'
+);
 
 $node_master->safe_psql('postgres',
 	qq[CREATE TABLE committs_test(x integer, y timestamp with time zone);]);
@@ -47,7 +51,8 @@ my $xid = $node_master->safe_psql(
 	INSERT INTO committs_test(x, y) VALUES (1, current_timestamp);
 	SELECT txid_current();
 	COMMIT;
-]);
+]
+);
 
 my $before_restart_ts = $node_master->safe_psql('postgres',
 	qq[SELECT pg_xact_commit_timestamp('$xid');]);
@@ -83,7 +88,8 @@ is($ret, 3, 'no commit timestamp from enable tx when cts disabled');
 like(
 	$stderr,
 	qr/could not get commit timestamp data/,
-	'expected error from enabled tx when committs disabled');
+	'expected error from enabled tx when committs disabled'
+);
 
 # Do a tx while cts disabled
 my $xid_disabled = $node_master->safe_psql(
@@ -92,7 +98,8 @@ my $xid_disabled = $node_master->safe_psql(
 	INSERT INTO committs_test(x, y) VALUES (2, current_timestamp);
 	SELECT txid_current();
 	COMMIT;
-]);
+]
+);
 
 # Should be inaccessible
 ($ret, $stdout, $stderr) = $node_master->psql('postgres',
@@ -101,7 +108,8 @@ is($ret, 3, 'no commit timestamp when disabled');
 like(
 	$stderr,
 	qr/could not get commit timestamp data/,
-	'expected error from disabled tx when committs disabled');
+	'expected error from disabled tx when committs disabled'
+);
 
 # Re-enable, restart and ensure we can still get the old timestamps
 $node_master->append_conf('postgresql.conf', 'track_commit_timestamp = on');
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index 10716ab..dc8a8b9 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -43,12 +43,16 @@ my %pgdump_runs = (
 		dump_cmd => [
 			'pg_dump',                            '--no-sync',
 			"--file=$tempdir/binary_upgrade.sql", '--schema-only',
-			'--binary-upgrade',                   '--dbname=postgres', ], },
+			'--binary-upgrade',                   '--dbname=postgres',
+		],
+	},
 	clean => {
 		dump_cmd => [
 			'pg_dump', "--file=$tempdir/clean.sql",
 			'-c',      '--no-sync',
-			'--dbname=postgres', ], },
+			'--dbname=postgres',
+		],
+	},
 	clean_if_exists => {
 		dump_cmd => [
 			'pg_dump',
@@ -57,7 +61,9 @@ my %pgdump_runs = (
 			'-c',
 			'--if-exists',
 			'--encoding=UTF8',    # no-op, just tests that option is accepted
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	createdb => {
 		dump_cmd => [
 			'pg_dump',
@@ -65,7 +71,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/createdb.sql",
 			'-C',
 			'-R',                 # no-op, just for testing
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	data_only => {
 		dump_cmd => [
 			'pg_dump',
@@ -73,7 +81,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/data_only.sql",
 			'-a',
 			'-v',                 # no-op, just make sure it works
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	defaults => {
 		dump_cmd => [ 'pg_dump', '-f', "$tempdir/defaults.sql", 'postgres', ],
 	},
@@ -81,70 +91,97 @@ my %pgdump_runs = (
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Fc', '-Z6',
-			"--file=$tempdir/defaults_custom_format.dump", 'postgres', ],
+			"--file=$tempdir/defaults_custom_format.dump", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_custom_format.sql",
-			"$tempdir/defaults_custom_format.dump", ], },
+			"$tempdir/defaults_custom_format.dump",
+		],
+	},
 	defaults_dir_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Fd',
-			"--file=$tempdir/defaults_dir_format", 'postgres', ],
+			"--file=$tempdir/defaults_dir_format", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_dir_format.sql",
-			"$tempdir/defaults_dir_format", ], },
+			"$tempdir/defaults_dir_format",
+		],
+	},
 	defaults_parallel => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Fd', '-j2',
-			"--file=$tempdir/defaults_parallel", 'postgres', ],
+			"--file=$tempdir/defaults_parallel", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_parallel.sql",
-			"$tempdir/defaults_parallel", ], },
+			"$tempdir/defaults_parallel",
+		],
+	},
 	defaults_tar_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Ft',
-			"--file=$tempdir/defaults_tar_format.tar", 'postgres', ],
+			"--file=$tempdir/defaults_tar_format.tar", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_tar_format.sql",
-			"$tempdir/defaults_tar_format.tar", ], },
+			"$tempdir/defaults_tar_format.tar",
+		],
+	},
 	pg_dumpall_globals => {
 		dump_cmd => [
 			'pg_dumpall',                             '--no-sync',
-			"--file=$tempdir/pg_dumpall_globals.sql", '-g', ], },
+			"--file=$tempdir/pg_dumpall_globals.sql", '-g',
+		],
+	},
 	no_privs => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_privs.sql", '-x',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	no_owner => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_owner.sql", '-O',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	schema_only => {
 		dump_cmd => [
 			'pg_dump', '--no-sync', "--file=$tempdir/schema_only.sql",
-			'-s', 'postgres', ], },
+			'-s', 'postgres',
+		],
+	},
 	section_pre_data => {
 		dump_cmd => [
 			'pg_dump',                              '--no-sync',
 			"--file=$tempdir/section_pre_data.sql", '--section=pre-data',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_data => {
 		dump_cmd => [
 			'pg_dump',                          '--no-sync',
 			"--file=$tempdir/section_data.sql", '--section=data',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_post_data => {
 		dump_cmd => [
 			'pg_dump', '--no-sync', "--file=$tempdir/section_post_data.sql",
-			'--section=post-data', 'postgres', ], },);
+			'--section=post-data', 'postgres',
+		],
+	},
+);
 
 ###############################################################
 # Definition of the tests to run.
@@ -184,7 +221,8 @@ my %full_runs = (
 	createdb        => 1,
 	defaults        => 1,
 	no_privs        => 1,
-	no_owner        => 1,);
+	no_owner        => 1,
+);
 
 my %tests = (
 	'ALTER EXTENSION test_pg_dump' => {
@@ -196,7 +234,8 @@ my %tests = (
 			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE EXTENSION test_pg_dump' => {
 		create_order => 2,
@@ -207,14 +246,17 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { binary_upgrade => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { binary_upgrade => 1, },
+	},
 
 	'CREATE ROLE regress_dump_test_role' => {
 		create_order => 1,
 		create_sql   => 'CREATE ROLE regress_dump_test_role;',
 		regexp       => qr/^CREATE ROLE regress_dump_test_role;\n/m,
-		like         => { pg_dumpall_globals => 1, }, },
+		like         => { pg_dumpall_globals => 1, },
+	},
 
 	'CREATE SEQUENCE regress_pg_dump_table_col1_seq' => {
 		regexp => qr/^
@@ -226,7 +268,8 @@ my %tests = (
                     \n\s+\QNO MAXVALUE\E
                     \n\s+\QCACHE 1;\E
                     \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE TABLE regress_pg_dump_table_added' => {
 		create_order => 7,
@@ -237,7 +280,8 @@ my %tests = (
 			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE SEQUENCE regress_pg_dump_seq' => {
 		regexp => qr/^
@@ -248,7 +292,8 @@ my %tests = (
                     \n\s+\QNO MAXVALUE\E
                     \n\s+\QCACHE 1;\E
                     \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'SETVAL SEQUENCE regress_seq_dumpable' => {
 		create_order => 6,
@@ -259,7 +304,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			data_only    => 1,
-			section_data => 1, }, },
+			section_data => 1,
+		},
+	},
 
 	'CREATE TABLE regress_pg_dump_table' => {
 		regexp => qr/^
@@ -267,13 +314,15 @@ my %tests = (
 			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE ACCESS METHOD regress_test_am' => {
 		regexp => qr/^
 			\QCREATE ACCESS METHOD regress_test_am TYPE INDEX HANDLER bthandler;\E
 			\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'COMMENT ON EXTENSION test_pg_dump' => {
 		regexp => qr/^
@@ -283,7 +332,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'GRANT SELECT regress_pg_dump_table_added pre-ALTER EXTENSION' => {
 		create_order => 8,
@@ -292,7 +343,8 @@ my %tests = (
 		regexp => qr/^
 			\QGRANT SELECT ON TABLE public.regress_pg_dump_table_added TO regress_dump_test_role;\E
 			\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'REVOKE SELECT regress_pg_dump_table_added post-ALTER EXTENSION' => {
 		create_order => 10,
@@ -304,8 +356,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT SELECT ON TABLE regress_pg_dump_table' => {
 		regexp => qr/^
@@ -313,7 +367,8 @@ my %tests = (
 			\QGRANT SELECT ON TABLE public.regress_pg_dump_table TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT SELECT(col1) ON regress_pg_dump_table' => {
 		regexp => qr/^
@@ -321,7 +376,8 @@ my %tests = (
 			\QGRANT SELECT(col1) ON TABLE public.regress_pg_dump_table TO PUBLIC;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT SELECT(col2) ON regress_pg_dump_table TO regress_dump_test_role'
 	  => {
@@ -334,8 +390,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	  },
 
 	'GRANT USAGE ON regress_pg_dump_table_col1_seq TO regress_dump_test_role'
 	  => {
@@ -348,14 +406,17 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	  },
 
 	'GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role' => {
 		regexp => qr/^
 			\QGRANT USAGE ON SEQUENCE public.regress_pg_dump_seq TO regress_dump_test_role;\E
 			\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'REVOKE SELECT(col1) ON regress_pg_dump_table' => {
 		create_order => 3,
@@ -367,8 +428,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	# Objects included in extension part of a schema created by this extension */
 	'CREATE TABLE regress_pg_dump_schema.test_table' => {
@@ -377,7 +440,8 @@ my %tests = (
 			\n\s+\Qcol1 integer,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT SELECT ON regress_pg_dump_schema.test_table' => {
 		regexp => qr/^
@@ -385,7 +449,8 @@ my %tests = (
 			\QGRANT SELECT ON TABLE regress_pg_dump_schema.test_table TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE SEQUENCE regress_pg_dump_schema.test_seq' => {
 		regexp => qr/^
@@ -396,7 +461,8 @@ my %tests = (
                     \n\s+\QNO MAXVALUE\E
                     \n\s+\QCACHE 1;\E
                     \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT USAGE ON regress_pg_dump_schema.test_seq' => {
 		regexp => qr/^
@@ -404,14 +470,16 @@ my %tests = (
 			\QGRANT USAGE ON SEQUENCE regress_pg_dump_schema.test_seq TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE TYPE regress_pg_dump_schema.test_type' => {
 		regexp => qr/^
                     \QCREATE TYPE regress_pg_dump_schema.test_type AS (\E
                     \n\s+\Qcol1 integer\E
                     \n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT USAGE ON regress_pg_dump_schema.test_type' => {
 		regexp => qr/^
@@ -419,14 +487,16 @@ my %tests = (
 			\QGRANT ALL ON TYPE regress_pg_dump_schema.test_type TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE FUNCTION regress_pg_dump_schema.test_func' => {
 		regexp => qr/^
             \QCREATE FUNCTION regress_pg_dump_schema.test_func() RETURNS integer\E
             \n\s+\QLANGUAGE sql\E
             \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT ALL ON regress_pg_dump_schema.test_func' => {
 		regexp => qr/^
@@ -434,7 +504,8 @@ my %tests = (
 			\QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE AGGREGATE regress_pg_dump_schema.test_agg' => {
 		regexp => qr/^
@@ -442,7 +513,8 @@ my %tests = (
             \n\s+\QSFUNC = int2_sum,\E
             \n\s+\QSTYPE = bigint\E
             \n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT ALL ON regress_pg_dump_schema.test_agg' => {
 		regexp => qr/^
@@ -450,7 +522,8 @@ my %tests = (
 			\QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_agg(smallint) TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	# Objects not included in extension, part of schema created by extension
 	'CREATE TABLE regress_pg_dump_schema.external_tab' => {
@@ -464,7 +537,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, }, },);
+			section_pre_data => 1,
+		},
+	},
+);
 
 #########################################
 # Create a PG instance to test actually dumping from
@@ -537,7 +613,8 @@ foreach my $test (
 		{
 			0;
 		}
-	} keys %tests)
+	} keys %tests
+  )
 {
 	if ($tests{$test}->{create_sql})
 	{
@@ -583,16 +660,24 @@ foreach my $run (sort keys %pgdump_runs)
 		if ($tests{$test}->{like}->{$test_key}
 			&& !defined($tests{$test}->{unlike}->{$test_key}))
 		{
-			if (!ok($output_file =~ $tests{$test}->{regexp},
-					"$run: should dump $test"))
+			if (
+				!ok(
+					$output_file =~ $tests{$test}->{regexp},
+					"$run: should dump $test"
+				)
+			  )
 			{
 				diag("Review $run results in $tempdir");
 			}
 		}
 		else
 		{
-			if (!ok($output_file !~ $tests{$test}->{regexp},
-					"$run: should not dump $test"))
+			if (
+				!ok(
+					$output_file !~ $tests{$test}->{regexp},
+					"$run: should not dump $test"
+				)
+			  )
 			{
 				diag("Review $run results in $tempdir");
 			}
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 3b06e78..44822e2 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -155,7 +155,8 @@ sub new
 		_host    => $pghost,
 		_basedir => "$TestLib::tmp_check/t_${testname}_${name}_data",
 		_name    => $name,
-		_logfile => "$TestLib::log_path/${testname}_${name}.log" };
+		_logfile => "$TestLib::log_path/${testname}_${name}.log"
+	};
 
 	bless $self, $class;
 	mkdir $self->{_basedir}
@@ -598,7 +599,8 @@ sub _backup_fs
 		filterfn => sub {
 			my $src = shift;
 			return ($src ne 'log' and $src ne 'postmaster.pid');
-		});
+		}
+	);
 
 	if ($hot)
 	{
@@ -669,7 +671,8 @@ sub init_from_backup
 		'postgresql.conf',
 		qq(
 port = $port
-));
+)
+	);
 	$self->enable_streaming($root_node) if $params{has_streaming};
 	$self->enable_restoring($root_node) if $params{has_restoring};
 }
@@ -801,7 +804,8 @@ sub enable_streaming
 		'recovery.conf', qq(
 primary_conninfo='$root_connstr application_name=$name'
 standby_mode=on
-));
+)
+	);
 }
 
 # Internal routine to enable archive recovery command on a standby node
@@ -829,7 +833,8 @@ sub enable_restoring
 		'recovery.conf', qq(
 restore_command = '$copy_command'
 standby_mode = on
-));
+)
+	);
 }
 
 # Internal routine to enable archiving
@@ -858,7 +863,8 @@ sub enable_archiving
 		'postgresql.conf', qq(
 archive_mode = on
 archive_command = '$copy_command'
-));
+)
+	);
 }
 
 # Internal method
@@ -1056,7 +1062,8 @@ sub safe_psql
 		stdout        => \$stdout,
 		stderr        => \$stderr,
 		on_error_die  => 1,
-		on_error_stop => 1);
+		on_error_stop => 1
+	);
 
 	# psql can emit stderr from NOTICEs etc
 	if ($stderr ne "")
@@ -1471,7 +1478,8 @@ sub lsn
 		'flush'   => 'pg_current_wal_flush_lsn()',
 		'write'   => 'pg_current_wal_lsn()',
 		'receive' => 'pg_last_wal_receive_lsn()',
-		'replay'  => 'pg_last_wal_replay_lsn()');
+		'replay'  => 'pg_last_wal_replay_lsn()'
+	);
 
 	$mode = '<undef>' if !defined($mode);
 	croak "unknown mode for 'lsn': '$mode', valid modes are "
@@ -1658,11 +1666,13 @@ sub slot
 	my @columns = (
 		'plugin', 'slot_type',  'datoid', 'database',
 		'active', 'active_pid', 'xmin',   'catalog_xmin',
-		'restart_lsn');
+		'restart_lsn'
+	);
 	return $self->query_hash(
 		'postgres',
 		"SELECT __COLUMNS__ FROM pg_catalog.pg_replication_slots WHERE slot_name = '$slot_name'",
-		@columns);
+		@columns
+	);
 }
 
 =pod
@@ -1697,7 +1707,8 @@ sub pg_recvlogical_upto
 
 	my @cmd = (
 		'pg_recvlogical', '-S', $slot_name, '--dbname',
-		$self->connstr($dbname));
+		$self->connstr($dbname)
+	);
 	push @cmd, '--endpos', $endpos;
 	push @cmd, '-f', '-', '--no-loop', '--start';
 
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 355ef5f..3410fea 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -153,7 +153,8 @@ sub tempdir
 	return File::Temp::tempdir(
 		$prefix . '_XXXX',
 		DIR     => $tmp_check,
-		CLEANUP => 1);
+		CLEANUP => 1
+	);
 }
 
 sub tempdir_short
@@ -256,7 +257,8 @@ sub check_mode_recursive
 	my $result = 1;
 
 	find(
-		{   follow_fast => 1,
+		{
+			follow_fast => 1,
 			wanted      => sub {
 				my $file_stat = stat($File::Find::name);
 
@@ -282,7 +284,8 @@ sub check_mode_recursive
 						print(
 							*STDERR,
 							sprintf("$File::Find::name mode must be %04o\n",
-								$expected_file_mode));
+								$expected_file_mode)
+						);
 
 						$result = 0;
 						return;
@@ -297,7 +300,8 @@ sub check_mode_recursive
 						print(
 							*STDERR,
 							sprintf("$File::Find::name mode must be %04o\n",
-								$expected_dir_mode));
+								$expected_dir_mode)
+						);
 
 						$result = 0;
 						return;
@@ -311,7 +315,8 @@ sub check_mode_recursive
 				}
 			}
 		},
-		$dir);
+		$dir
+	);
 
 	return $result;
 }
@@ -322,7 +327,8 @@ sub chmod_recursive
 	my ($dir, $dir_mode, $file_mode) = @_;
 
 	find(
-		{   follow_fast => 1,
+		{
+			follow_fast => 1,
 			wanted      => sub {
 				my $file_stat = stat($File::Find::name);
 
@@ -335,7 +341,8 @@ sub chmod_recursive
 				}
 			}
 		},
-		$dir);
+		$dir
+	);
 }
 
 # Check presence of a given regexp within pg_config.h for the installation
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index a29a6c7..e3a96da 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -93,7 +93,8 @@ sub test_target_session_attrs
 	my ($ret, $stdout, $stderr) =
 	  $node1->psql('postgres', 'SHOW port;',
 		extra_params => [ '-d', $connstr ]);
-	is( $status == $ret && $stdout eq $target_node->port,
+	is(
+		$status == $ret && $stdout eq $target_node->port,
 		1,
 		"connect to node $target_name if mode \"$mode\" and $node1_name,$node2_name listed"
 	);
@@ -124,22 +125,28 @@ note "switching to physical replication slot";
 my ($slotname_1, $slotname_2) = ('standby_1', 'standby_2');
 $node_master->append_conf('postgresql.conf', "max_replication_slots = 4");
 $node_master->restart;
-is( $node_master->psql(
+is(
+	$node_master->psql(
 		'postgres',
-		qq[SELECT pg_create_physical_replication_slot('$slotname_1');]),
+		qq[SELECT pg_create_physical_replication_slot('$slotname_1');]
+	),
 	0,
-	'physical slot created on master');
+	'physical slot created on master'
+);
 $node_standby_1->append_conf('recovery.conf',
 	"primary_slot_name = $slotname_1");
 $node_standby_1->append_conf('postgresql.conf',
 	"wal_receiver_status_interval = 1");
 $node_standby_1->append_conf('postgresql.conf', "max_replication_slots = 4");
 $node_standby_1->restart;
-is( $node_standby_1->psql(
+is(
+	$node_standby_1->psql(
 		'postgres',
-		qq[SELECT pg_create_physical_replication_slot('$slotname_2');]),
+		qq[SELECT pg_create_physical_replication_slot('$slotname_2');]
+	),
 	0,
-	'physical slot created on intermediate replica');
+	'physical slot created on intermediate replica'
+);
 $node_standby_2->append_conf('recovery.conf',
 	"primary_slot_name = $slotname_2");
 $node_standby_2->append_conf('postgresql.conf',
@@ -157,7 +164,8 @@ sub get_slot_xmins
 		SELECT $check_expr
 		FROM pg_catalog.pg_replication_slots
 		WHERE slot_name = '$slotname';
-	]) or die "Timed out waiting for slot xmins to advance";
+	]
+	) or die "Timed out waiting for slot xmins to advance";
 
 	my $slotinfo = $node->slot($slotname);
 	return ($slotinfo->{'xmin'}, $slotinfo->{'catalog_xmin'});
@@ -237,7 +245,8 @@ begin
     end;
   end loop;
 end$$;
-});
+}
+);
 
 $node_master->safe_psql('postgres', 'VACUUM;');
 $node_master->safe_psql('postgres', 'CHECKPOINT;');
diff --git a/src/test/recovery/t/002_archiving.pl b/src/test/recovery/t/002_archiving.pl
index e1bd3c9..02527a5 100644
--- a/src/test/recovery/t/002_archiving.pl
+++ b/src/test/recovery/t/002_archiving.pl
@@ -10,7 +10,8 @@ use File::Copy;
 my $node_master = get_new_node('master');
 $node_master->init(
 	has_archiving    => 1,
-	allows_streaming => 1);
+	allows_streaming => 1
+);
 my $backup_name = 'my_backup';
 
 # Start it
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
index 824fa4d..35bcae6 100644
--- a/src/test/recovery/t/003_recovery_targets.pl
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -119,25 +119,29 @@ test_recovery_standby('LSN', 'standby_5', $node_master, \@recovery_params,
 @recovery_params = (
 	"recovery_target_name = '$recovery_name'",
 	"recovery_target_xid  = '$recovery_txid'",
-	"recovery_target_time = '$recovery_time'");
+	"recovery_target_time = '$recovery_time'"
+);
 test_recovery_standby('name + XID + time',
 	'standby_6', $node_master, \@recovery_params, "3000", $lsn3);
 @recovery_params = (
 	"recovery_target_time = '$recovery_time'",
 	"recovery_target_name = '$recovery_name'",
-	"recovery_target_xid  = '$recovery_txid'");
+	"recovery_target_xid  = '$recovery_txid'"
+);
 test_recovery_standby('time + name + XID',
 	'standby_7', $node_master, \@recovery_params, "2000", $lsn2);
 @recovery_params = (
 	"recovery_target_xid  = '$recovery_txid'",
 	"recovery_target_time = '$recovery_time'",
-	"recovery_target_name = '$recovery_name'");
+	"recovery_target_name = '$recovery_name'"
+);
 test_recovery_standby('XID + time + name',
 	'standby_8', $node_master, \@recovery_params, "4000", $lsn4);
 @recovery_params = (
 	"recovery_target_xid  = '$recovery_txid'",
 	"recovery_target_time = '$recovery_time'",
 	"recovery_target_name = '$recovery_name'",
-	"recovery_target_lsn = '$recovery_lsn'",);
+	"recovery_target_lsn = '$recovery_lsn'",
+);
 test_recovery_standby('XID + time + name + LSN',
 	'standby_9', $node_master, \@recovery_params, "5000", $lsn5);
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index 34ee335..a7848b4 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -49,7 +49,8 @@ $node_standby_2->append_conf(
 primary_conninfo='$connstr_1 application_name=@{[$node_standby_2->name]}'
 standby_mode=on
 recovery_target_timeline='latest'
-));
+)
+);
 $node_standby_2->restart;
 
 # Insert some data in standby 1 and check its presence in standby 2
diff --git a/src/test/recovery/t/005_replay_delay.pl b/src/test/recovery/t/005_replay_delay.pl
index 8909c45..443cc3c 100644
--- a/src/test/recovery/t/005_replay_delay.pl
+++ b/src/test/recovery/t/005_replay_delay.pl
@@ -27,7 +27,8 @@ $node_standby->init_from_backup($node_master, $backup_name,
 $node_standby->append_conf(
 	'recovery.conf', qq(
 recovery_min_apply_delay = '${delay}s'
-));
+)
+);
 $node_standby->start;
 
 # Make new content on master and check its presence in standby depending
diff --git a/src/test/recovery/t/006_logical_decoding.pl b/src/test/recovery/t/006_logical_decoding.pl
index ff1ea0e..cd4fafa 100644
--- a/src/test/recovery/t/006_logical_decoding.pl
+++ b/src/test/recovery/t/006_logical_decoding.pl
@@ -16,7 +16,8 @@ $node_master->init(allows_streaming => 1);
 $node_master->append_conf(
 	'postgresql.conf', qq(
 wal_level = logical
-));
+)
+);
 $node_master->start;
 my $backup_name = 'master_backup';
 
@@ -74,7 +75,8 @@ print "waiting to replay $endpos\n";
 my $stdout_recv = $node_master->pg_recvlogical_upto(
 	'postgres', 'test_slot', $endpos, 10,
 	'include-xids'     => '0',
-	'skip-empty-xacts' => '1');
+	'skip-empty-xacts' => '1'
+);
 chomp($stdout_recv);
 is($stdout_recv, $expected,
 	'got same expected output from pg_recvlogical decoding session');
@@ -86,19 +88,22 @@ $node_master->poll_query_until('postgres',
 $stdout_recv = $node_master->pg_recvlogical_upto(
 	'postgres', 'test_slot', $endpos, 10,
 	'include-xids'     => '0',
-	'skip-empty-xacts' => '1');
+	'skip-empty-xacts' => '1'
+);
 chomp($stdout_recv);
 is($stdout_recv, '',
 	'pg_recvlogical acknowledged changes, nothing pending on slot');
 
 $node_master->safe_psql('postgres', 'CREATE DATABASE otherdb');
 
-is( $node_master->psql(
+is(
+	$node_master->psql(
 		'otherdb',
 		"SELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;"
 	),
 	3,
-	'replaying logical slot from another database fails');
+	'replaying logical slot from another database fails'
+);
 
 $node_master->safe_psql('otherdb',
 	qq[SELECT pg_create_logical_replication_slot('otherdb_slot', 'test_decoding');]
@@ -112,8 +117,11 @@ SKIP:
 	skip "Test fails on Windows perl", 2 if $Config{osname} eq 'MSWin32';
 
 	my $pg_recvlogical = IPC::Run::start(
-		[   'pg_recvlogical', '-d', $node_master->connstr('otherdb'),
-			'-S', 'otherdb_slot', '-f', '-', '--start' ]);
+		[
+			'pg_recvlogical', '-d', $node_master->connstr('otherdb'),
+			'-S', 'otherdb_slot', '-f', '-', '--start'
+		]
+	);
 	$node_master->poll_query_until('otherdb',
 		"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'otherdb_slot' AND active_pid IS NOT NULL)"
 	) or die "slot never became active";
@@ -143,15 +151,21 @@ is($node_master->safe_psql('postgres', 'SHOW wal_level'),
 	'replica', 'wal_level is replica');
 isnt($node_master->slot('test_slot')->{'catalog_xmin'},
 	'0', 'restored slot catalog_xmin is nonzero');
-is( $node_master->psql(
+is(
+	$node_master->psql(
 		'postgres',
-		qq[SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);]),
+		qq[SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);]
+	),
 	3,
-	'reading from slot with wal_level < logical fails');
-is( $node_master->psql(
-		'postgres', q[SELECT pg_drop_replication_slot('test_slot')]),
+	'reading from slot with wal_level < logical fails'
+);
+is(
+	$node_master->psql(
+		'postgres', q[SELECT pg_drop_replication_slot('test_slot')]
+	),
 	0,
-	'can drop logical slot while wal_level = replica');
+	'can drop logical slot while wal_level = replica'
+);
 is($node_master->slot('test_slot')->{'catalog_xmin'}, '', 'slot was dropped');
 
 # done with the node
diff --git a/src/test/recovery/t/007_sync_rep.pl b/src/test/recovery/t/007_sync_rep.pl
index 0ddf70b..bdf46ac 100644
--- a/src/test/recovery/t/007_sync_rep.pl
+++ b/src/test/recovery/t/007_sync_rep.pl
@@ -60,7 +60,8 @@ test_sync_state(
 standby2|2|potential
 standby3|0|async),
 	'old syntax of synchronous_standby_names',
-	'standby1,standby2');
+	'standby1,standby2'
+);
 
 # Check that all the standbys are considered as either sync or
 # potential when * is specified in synchronous_standby_names.
@@ -72,7 +73,8 @@ test_sync_state(
 standby2|1|potential
 standby3|1|potential),
 	'asterisk in synchronous_standby_names',
-	'*');
+	'*'
+);
 
 # Stop and start standbys to rearrange the order of standbys
 # in WalSnd array. Now, if standbys have the same priority,
@@ -90,7 +92,8 @@ test_sync_state(
 	$node_master, qq(standby2|2|sync
 standby3|3|sync),
 	'2 synchronous standbys',
-	'2(standby1,standby2,standby3)');
+	'2(standby1,standby2,standby3)'
+);
 
 # Start standby1
 $node_standby_1->start;
@@ -110,7 +113,8 @@ test_sync_state(
 standby2|2|sync
 standby3|3|potential
 standby4|0|async),
-	'2 sync, 1 potential, and 1 async');
+	'2 sync, 1 potential, and 1 async'
+);
 
 # Check that sync_state of each standby is determined correctly
 # when num_sync exceeds the number of names of potential sync standbys
@@ -121,7 +125,8 @@ standby2|4|sync
 standby3|3|sync
 standby4|1|sync),
 	'num_sync exceeds the num of potential sync standbys',
-	'6(standby4,standby0,standby3,standby2)');
+	'6(standby4,standby0,standby3,standby2)'
+);
 
 # The setting that * comes before another standby name is acceptable
 # but does not make sense in most cases. Check that sync_state is
@@ -133,7 +138,8 @@ standby2|2|sync
 standby3|2|potential
 standby4|2|potential),
 	'asterisk comes before another standby name',
-	'2(standby1,*,standby2)');
+	'2(standby1,*,standby2)'
+);
 
 # Check that the setting of '2(*)' chooses standby2 and standby3 that are stored
 # earlier in WalSnd array as sync standbys.
@@ -143,7 +149,8 @@ standby2|1|sync
 standby3|1|sync
 standby4|1|potential),
 	'multiple standbys having the same priority are chosen as sync',
-	'2(*)');
+	'2(*)'
+);
 
 # Stop Standby3 which is considered in 'sync' state.
 $node_standby_3->stop;
@@ -154,7 +161,8 @@ test_sync_state(
 	$node_master, qq(standby1|1|sync
 standby2|1|sync
 standby4|1|potential),
-	'potential standby found earlier in array is promoted to sync');
+	'potential standby found earlier in array is promoted to sync'
+);
 
 # Check that standby1 and standby2 are chosen as sync standbys
 # based on their priorities.
@@ -163,7 +171,8 @@ test_sync_state(
 standby2|2|sync
 standby4|0|async),
 	'priority-based sync replication specified by FIRST keyword',
-	'FIRST 2(standby1, standby2)');
+	'FIRST 2(standby1, standby2)'
+);
 
 # Check that all the listed standbys are considered as candidates
 # for sync standbys in a quorum-based sync replication.
@@ -172,7 +181,8 @@ test_sync_state(
 standby2|1|quorum
 standby4|0|async),
 	'2 quorum and 1 async',
-	'ANY 2(standby1, standby2)');
+	'ANY 2(standby1, standby2)'
+);
 
 # Start Standby3 which will be considered in 'quorum' state.
 $node_standby_3->start;
@@ -185,4 +195,5 @@ standby2|1|quorum
 standby3|1|quorum
 standby4|1|quorum),
 	'all standbys are considered as candidates for quorum sync standbys',
-	'ANY 2(*)');
+	'ANY 2(*)'
+);
diff --git a/src/test/recovery/t/008_fsm_truncation.pl b/src/test/recovery/t/008_fsm_truncation.pl
index ddab464..7511e64 100644
--- a/src/test/recovery/t/008_fsm_truncation.pl
+++ b/src/test/recovery/t/008_fsm_truncation.pl
@@ -18,7 +18,8 @@ fsync = on
 wal_log_hints = on
 max_prepared_transactions = 5
 autovacuum = off
-});
+}
+);
 
 # Create a master node and its standby, initializing both with some data
 # at the same time.
@@ -36,7 +37,8 @@ create table testtab (a int, b char(100));
 insert into testtab select generate_series(1,1000), 'foo';
 insert into testtab select generate_series(1,1000), 'foo';
 delete from testtab where ctid > '(8,0)';
-});
+}
+);
 
 # Take a lock on the table to prevent following vacuum from truncating it
 $node_master->psql(
@@ -44,7 +46,8 @@ $node_master->psql(
 begin;
 lock table testtab in row share mode;
 prepare transaction 'p1';
-});
+}
+);
 
 # Vacuum, update FSM without truncation
 $node_master->psql('postgres', 'vacuum verbose testtab');
@@ -59,7 +62,8 @@ $node_master->psql(
 insert into testtab select generate_series(1,1000), 'foo';
 delete from testtab where ctid > '(8,0)';
 vacuum verbose testtab;
-});
+}
+);
 
 # Ensure all buffers are now clean on the standby
 $node_standby->psql('postgres', 'checkpoint');
@@ -69,7 +73,8 @@ $node_master->psql(
 	'postgres', qq{
 rollback prepared 'p1';
 vacuum verbose testtab;
-});
+}
+);
 
 $node_master->psql('postgres', 'checkpoint');
 my $until_lsn =
@@ -89,8 +94,11 @@ $node_standby->psql('postgres', 'checkpoint');
 $node_standby->restart;
 
 # Insert should work on standby
-is( $node_standby->psql(
+is(
+	$node_standby->psql(
 		'postgres',
-		qq{insert into testtab select generate_series(1,1000), 'foo';}),
+		qq{insert into testtab select generate_series(1,1000), 'foo';}
+	),
 	0,
-	'INSERT succeeds with truncated relation FSM');
+	'INSERT succeeds with truncated relation FSM'
+);
diff --git a/src/test/recovery/t/009_twophase.pl b/src/test/recovery/t/009_twophase.pl
index 93c22d1..3b4b2c6 100644
--- a/src/test/recovery/t/009_twophase.pl
+++ b/src/test/recovery/t/009_twophase.pl
@@ -17,7 +17,8 @@ sub configure_and_reload
 	$node->append_conf(
 		'postgresql.conf', qq(
 		$parameter
-	));
+	)
+	);
 	$node->psql('postgres', "SELECT pg_reload_conf()", stdout => \$psql_out);
 	is($psql_out, 't', "reload node $name with $parameter");
 }
@@ -31,7 +32,8 @@ $node_london->append_conf(
 	'postgresql.conf', qq(
 	max_prepared_transactions = 10
 	log_checkpoints = true
-));
+)
+);
 $node_london->start;
 $node_london->backup('london_backup');
 
@@ -71,7 +73,8 @@ $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (3, 'issued to ${cur_master_name}');
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (4, 'issued to ${cur_master_name}');
-	PREPARE TRANSACTION 'xact_009_2';");
+	PREPARE TRANSACTION 'xact_009_2';"
+);
 $cur_master->stop;
 $cur_master->start;
 
@@ -99,7 +102,8 @@ $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (7, 'issued to ${cur_master_name}');
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (8, 'issued to ${cur_master_name}');
-	PREPARE TRANSACTION 'xact_009_4';");
+	PREPARE TRANSACTION 'xact_009_4';"
+);
 $cur_master->teardown_node;
 $cur_master->start;
 
@@ -126,7 +130,8 @@ $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (11, 'issued to ${cur_master_name}');
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (12, 'issued to ${cur_master_name}');
-	PREPARE TRANSACTION 'xact_009_5';");
+	PREPARE TRANSACTION 'xact_009_5';"
+);
 $cur_master->teardown_node;
 $cur_master->start;
 
@@ -145,7 +150,8 @@ $cur_master->psql(
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (14, 'issued to ${cur_master_name}');
 	PREPARE TRANSACTION 'xact_009_6';
-	COMMIT PREPARED 'xact_009_6';");
+	COMMIT PREPARED 'xact_009_6';"
+);
 $cur_master->teardown_node;
 $cur_master->start;
 $psql_rc = $cur_master->psql(
@@ -156,7 +162,8 @@ $psql_rc = $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (16, 'issued to ${cur_master_name}');
 	-- This prepare can fail due to conflicting GID or locks conflicts if
 	-- replay did not fully cleanup its state on previous commit.
-	PREPARE TRANSACTION 'xact_009_7';");
+	PREPARE TRANSACTION 'xact_009_7';"
+);
 is($psql_rc, '0', "Cleanup of shared memory state for 2PC commit");
 
 $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_7'");
@@ -172,11 +179,13 @@ $cur_master->psql(
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (18, 'issued to ${cur_master_name}');
 	PREPARE TRANSACTION 'xact_009_8';
-	COMMIT PREPARED 'xact_009_8';");
+	COMMIT PREPARED 'xact_009_8';"
+);
 $cur_standby->psql(
 	'postgres',
 	"SELECT count(*) FROM pg_prepared_xacts",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '0',
 	"Cleanup of shared memory state on running standby without checkpoint");
 
@@ -191,13 +200,15 @@ $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (19, 'issued to ${cur_master_name}');
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (20, 'issued to ${cur_master_name}');
-	PREPARE TRANSACTION 'xact_009_9';");
+	PREPARE TRANSACTION 'xact_009_9';"
+);
 $cur_standby->psql('postgres', "CHECKPOINT");
 $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_9'");
 $cur_standby->psql(
 	'postgres',
 	"SELECT count(*) FROM pg_prepared_xacts",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '0',
 	"Cleanup of shared memory state on running standby after checkpoint");
 
@@ -211,7 +222,8 @@ $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (21, 'issued to ${cur_master_name}');
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (22, 'issued to ${cur_master_name}');
-	PREPARE TRANSACTION 'xact_009_10';");
+	PREPARE TRANSACTION 'xact_009_10';"
+);
 $cur_master->teardown_node;
 $cur_standby->promote;
 
@@ -231,7 +243,8 @@ $cur_standby->enable_streaming($cur_master);
 $cur_standby->append_conf(
 	'recovery.conf', qq(
 recovery_target_timeline='latest'
-));
+)
+);
 $cur_standby->start;
 
 ###############################################################################
@@ -247,7 +260,8 @@ $cur_master->psql(
 	INSERT INTO t_009_tbl VALUES (23, 'issued to ${cur_master_name}');
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (24, 'issued to ${cur_master_name}');
-	PREPARE TRANSACTION 'xact_009_11';");
+	PREPARE TRANSACTION 'xact_009_11';"
+);
 $cur_master->stop;
 $cur_standby->restart;
 $cur_standby->promote;
@@ -260,7 +274,8 @@ $cur_master_name = $cur_master->name;
 $cur_master->psql(
 	'postgres',
 	"SELECT count(*) FROM pg_prepared_xacts",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '1',
 	"Restore prepared transactions from files with master down");
 
@@ -269,7 +284,8 @@ $cur_standby->enable_streaming($cur_master);
 $cur_standby->append_conf(
 	'recovery.conf', qq(
 recovery_target_timeline='latest'
-));
+)
+);
 $cur_standby->start;
 
 $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_11'");
@@ -286,7 +302,8 @@ $cur_master->psql(
 	SAVEPOINT s1;
 	INSERT INTO t_009_tbl VALUES (26, 'issued to ${cur_master_name}');
 	PREPARE TRANSACTION 'xact_009_12';
-	");
+	"
+);
 $cur_master->stop;
 $cur_standby->teardown_node;
 $cur_standby->start;
@@ -300,7 +317,8 @@ $cur_master_name = $cur_master->name;
 $cur_master->psql(
 	'postgres',
 	"SELECT count(*) FROM pg_prepared_xacts",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '1',
 	"Restore prepared transactions from records with master down");
 
@@ -309,7 +327,8 @@ $cur_standby->enable_streaming($cur_master);
 $cur_standby->append_conf(
 	'recovery.conf', qq(
 recovery_target_timeline='latest'
-));
+)
+);
 $cur_standby->start;
 
 $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_12'");
@@ -329,7 +348,8 @@ $cur_master->psql(
 	-- checkpoint will issue XLOG_STANDBY_LOCK that can conflict with lock
 	-- held by 'create table' statement
 	CHECKPOINT;
-	COMMIT PREPARED 'xact_009_13';");
+	COMMIT PREPARED 'xact_009_13';"
+);
 
 # Ensure that last transaction is replayed on standby.
 my $cur_master_lsn =
@@ -342,7 +362,8 @@ $cur_standby->poll_query_until('postgres', $caughtup_query)
 $cur_standby->psql(
 	'postgres',
 	"SELECT count(*) FROM t_009_tbl2",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '1', "Replay prepared transaction with DDL");
 
 ###############################################################################
@@ -352,14 +373,17 @@ is($psql_out, '1', "Replay prepared transaction with DDL");
 $cur_master->psql(
 	'postgres',
 	"SELECT count(*) FROM pg_prepared_xacts",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '0', "No uncommitted prepared transactions on master");
 
 $cur_master->psql(
 	'postgres',
 	"SELECT * FROM t_009_tbl ORDER BY id",
-	stdout => \$psql_out);
-is( $psql_out, qq{1|issued to london
+	stdout => \$psql_out
+);
+is(
+	$psql_out, qq{1|issued to london
 2|issued to london
 5|issued to london
 6|issued to london
@@ -381,27 +405,34 @@ is( $psql_out, qq{1|issued to london
 24|issued to paris
 25|issued to london
 26|issued to london},
-	"Check expected t_009_tbl data on master");
+	"Check expected t_009_tbl data on master"
+);
 
 $cur_master->psql(
 	'postgres',
 	"SELECT * FROM t_009_tbl2",
-	stdout => \$psql_out);
-is( $psql_out,
+	stdout => \$psql_out
+);
+is(
+	$psql_out,
 	qq{27|issued to paris},
-	"Check expected t_009_tbl2 data on master");
+	"Check expected t_009_tbl2 data on master"
+);
 
 $cur_standby->psql(
 	'postgres',
 	"SELECT count(*) FROM pg_prepared_xacts",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '0', "No uncommitted prepared transactions on standby");
 
 $cur_standby->psql(
 	'postgres',
 	"SELECT * FROM t_009_tbl ORDER BY id",
-	stdout => \$psql_out);
-is( $psql_out, qq{1|issued to london
+	stdout => \$psql_out
+);
+is(
+	$psql_out, qq{1|issued to london
 2|issued to london
 5|issued to london
 6|issued to london
@@ -423,12 +454,16 @@ is( $psql_out, qq{1|issued to london
 24|issued to paris
 25|issued to london
 26|issued to london},
-	"Check expected t_009_tbl data on standby");
+	"Check expected t_009_tbl data on standby"
+);
 
 $cur_standby->psql(
 	'postgres',
 	"SELECT * FROM t_009_tbl2",
-	stdout => \$psql_out);
-is( $psql_out,
+	stdout => \$psql_out
+);
+is(
+	$psql_out,
 	qq{27|issued to paris},
-	"Check expected t_009_tbl2 data on standby");
+	"Check expected t_009_tbl2 data on standby"
+);
diff --git a/src/test/recovery/t/010_logical_decoding_timelines.pl b/src/test/recovery/t/010_logical_decoding_timelines.pl
index a76eea8..4503b6c 100644
--- a/src/test/recovery/t/010_logical_decoding_timelines.pl
+++ b/src/test/recovery/t/010_logical_decoding_timelines.pl
@@ -41,7 +41,8 @@ max_wal_senders = 2
 log_min_messages = 'debug2'
 hot_standby_feedback = on
 wal_receiver_status_interval = 1
-]);
+]
+);
 $node_master->dump_info;
 $node_master->start;
 
@@ -75,7 +76,8 @@ my $node_replica = get_new_node('replica');
 $node_replica->init_from_backup(
 	$node_master, $backup_name,
 	has_streaming => 1,
-	has_restoring => 1);
+	has_restoring => 1
+);
 $node_replica->append_conf('recovery.conf',
 	q[primary_slot_name = 'phys_slot']);
 
@@ -87,10 +89,13 @@ is($node_master->psql('postgres', 'DROP DATABASE dropme'),
 	0, 'dropped DB with logical slot OK on master');
 $node_master->wait_for_catchup($node_replica, 'replay',
 	$node_master->lsn('insert'));
-is( $node_replica->safe_psql(
-		'postgres', q[SELECT 1 FROM pg_database WHERE datname = 'dropme']),
+is(
+	$node_replica->safe_psql(
+		'postgres', q[SELECT 1 FROM pg_database WHERE datname = 'dropme']
+	),
 	'',
-	'dropped DB dropme on standby');
+	'dropped DB dropme on standby'
+);
 is($node_master->slot('dropme_slot')->{'slot_name'},
 	undef, 'logical slot was actually dropped on standby');
 
@@ -117,7 +122,8 @@ $node_master->poll_query_until(
 	SELECT catalog_xmin IS NOT NULL
 	FROM pg_replication_slots
 	WHERE slot_name = 'phys_slot'
-	]) or die "slot's catalog_xmin never became set";
+	]
+) or die "slot's catalog_xmin never became set";
 
 my $phys_slot = $node_master->slot('phys_slot');
 isnt($phys_slot->{'xmin'}, '', 'xmin assigned on physical slot of master');
@@ -128,7 +134,8 @@ isnt($phys_slot->{'catalog_xmin'},
 cmp_ok(
 	$phys_slot->{'xmin'}, '>=',
 	$phys_slot->{'catalog_xmin'},
-	'xmin on physical slot must not be lower than catalog_xmin');
+	'xmin on physical slot must not be lower than catalog_xmin'
+);
 
 $node_master->safe_psql('postgres', 'CHECKPOINT');
 
@@ -148,13 +155,15 @@ is($ret, 3, 'replaying from after_basebackup slot fails');
 like(
 	$stderr,
 	qr/replication slot "after_basebackup" does not exist/,
-	'after_basebackup slot missing');
+	'after_basebackup slot missing'
+);
 
 # Should be able to read from slot created before base backup
 ($ret, $stdout, $stderr) = $node_replica->psql(
 	'postgres',
 	"SELECT data FROM pg_logical_slot_peek_changes('before_basebackup', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');",
-	timeout => 30);
+	timeout => 30
+);
 is($ret, 0, 'replay from slot before_basebackup succeeds');
 
 my $final_expected_output_bb = q(BEGIN
@@ -185,7 +194,8 @@ $stdout = $node_replica->pg_recvlogical_upto(
 	'postgres', 'before_basebackup',
 	$endpos,    30,
 	'include-xids'     => '0',
-	'skip-empty-xacts' => '1');
+	'skip-empty-xacts' => '1'
+);
 
 # walsender likes to add a newline
 chomp($stdout);
diff --git a/src/test/recovery/t/011_crash_recovery.pl b/src/test/recovery/t/011_crash_recovery.pl
index 6fe4786..2d90ef7 100644
--- a/src/test/recovery/t/011_crash_recovery.pl
+++ b/src/test/recovery/t/011_crash_recovery.pl
@@ -29,14 +29,17 @@ my ($stdin, $stdout, $stderr) = ('', '', '');
 # an xact to be in-progress when we crash and we need to know
 # its xid.
 my $tx = IPC::Run::start(
-	[   'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres') ],
+	[
+		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
+		$node->connstr('postgres')
+	],
 	'<',
 	\$stdin,
 	'>',
 	\$stdout,
 	'2>',
-	\$stderr);
+	\$stderr
+);
 $stdin .= q[
 BEGIN;
 CREATE TABLE mine(x integer);
diff --git a/src/test/recovery/t/012_subtransactions.pl b/src/test/recovery/t/012_subtransactions.pl
index efc23d0..6bd3a5b 100644
--- a/src/test/recovery/t/012_subtransactions.pl
+++ b/src/test/recovery/t/012_subtransactions.pl
@@ -13,7 +13,8 @@ $node_master->append_conf(
 	'postgresql.conf', qq(
 	max_prepared_transactions = 10
 	log_checkpoints = true
-));
+)
+);
 $node_master->start;
 $node_master->backup('master_backup');
 $node_master->psql('postgres', "CREATE TABLE t_012_tbl (id int)");
@@ -28,7 +29,8 @@ $node_standby->start;
 $node_master->append_conf(
 	'postgresql.conf', qq(
 	synchronous_standby_names = '*'
-));
+)
+);
 $node_master->psql('postgres', "SELECT pg_reload_conf()");
 
 my $psql_out = '';
@@ -55,7 +57,8 @@ $node_master->psql(
 	SAVEPOINT s5;
 	INSERT INTO t_012_tbl VALUES (43);
 	PREPARE TRANSACTION 'xact_012_1';
-	CHECKPOINT;");
+	CHECKPOINT;"
+);
 
 $node_master->stop;
 $node_master->start;
@@ -66,12 +69,14 @@ $node_master->psql(
 	BEGIN;
 	INSERT INTO t_012_tbl VALUES (142);
 	ROLLBACK;
-	COMMIT PREPARED 'xact_012_1';");
+	COMMIT PREPARED 'xact_012_1';"
+);
 
 $node_master->psql(
 	'postgres',
 	"SELECT count(*) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '6', "Check nextXid handling for prepared subtransactions");
 
 ###############################################################################
@@ -94,18 +99,21 @@ $node_master->psql(
         PERFORM hs_subxids(n - 1);
         RETURN;
     EXCEPTION WHEN raise_exception THEN NULL; END;
-    \$\$;");
+    \$\$;"
+);
 $node_master->psql(
 	'postgres', "
 	BEGIN;
 	SELECT hs_subxids(127);
-	COMMIT;");
+	COMMIT;"
+);
 $node_master->wait_for_catchup($node_standby, 'replay',
 	$node_master->lsn('insert'));
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '8128', "Visible");
 $node_master->stop;
 $node_standby->promote;
@@ -113,7 +121,8 @@ $node_standby->promote;
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '8128', "Visible");
 
 # restore state
@@ -122,12 +131,14 @@ $node_standby->enable_streaming($node_master);
 $node_standby->append_conf(
 	'recovery.conf', qq(
 recovery_target_timeline='latest'
-));
+)
+);
 $node_standby->start;
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '8128', "Visible");
 
 $node_master->psql('postgres', "DELETE FROM t_012_tbl");
@@ -145,18 +156,21 @@ $node_master->psql(
         PERFORM hs_subxids(n - 1);
         RETURN;
     EXCEPTION WHEN raise_exception THEN NULL; END;
-    \$\$;");
+    \$\$;"
+);
 $node_master->psql(
 	'postgres', "
 	BEGIN;
 	SELECT hs_subxids(127);
-	PREPARE TRANSACTION 'xact_012_1';");
+	PREPARE TRANSACTION 'xact_012_1';"
+);
 $node_master->wait_for_catchup($node_standby, 'replay',
 	$node_master->lsn('insert'));
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '-1', "Not visible");
 $node_master->stop;
 $node_standby->promote;
@@ -164,7 +178,8 @@ $node_standby->promote;
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '-1', "Not visible");
 
 # restore state
@@ -173,7 +188,8 @@ $node_standby->enable_streaming($node_master);
 $node_standby->append_conf(
 	'recovery.conf', qq(
 recovery_target_timeline='latest'
-));
+)
+);
 $node_standby->start;
 $psql_rc = $node_master->psql('postgres', "COMMIT PREPARED 'xact_012_1'");
 is($psql_rc, '0',
@@ -183,7 +199,8 @@ is($psql_rc, '0',
 $node_master->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '8128', "Visible");
 
 $node_master->psql('postgres', "DELETE FROM t_012_tbl");
@@ -191,13 +208,15 @@ $node_master->psql(
 	'postgres', "
 	BEGIN;
 	SELECT hs_subxids(201);
-	PREPARE TRANSACTION 'xact_012_1';");
+	PREPARE TRANSACTION 'xact_012_1';"
+);
 $node_master->wait_for_catchup($node_standby, 'replay',
 	$node_master->lsn('insert'));
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '-1', "Not visible");
 $node_master->stop;
 $node_standby->promote;
@@ -205,7 +224,8 @@ $node_standby->promote;
 $node_standby->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '-1', "Not visible");
 
 # restore state
@@ -214,7 +234,8 @@ $node_standby->enable_streaming($node_master);
 $node_standby->append_conf(
 	'recovery.conf', qq(
 recovery_target_timeline='latest'
-));
+)
+);
 $node_standby->start;
 $psql_rc = $node_master->psql('postgres', "ROLLBACK PREPARED 'xact_012_1'");
 is($psql_rc, '0',
@@ -224,5 +245,6 @@ is($psql_rc, '0',
 $node_master->psql(
 	'postgres',
 	"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
-	stdout => \$psql_out);
+	stdout => \$psql_out
+);
 is($psql_out, '-1', "Not visible");
diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl
index d8ef22f..5648d26 100644
--- a/src/test/recovery/t/013_crash_restart.pl
+++ b/src/test/recovery/t/013_crash_restart.pl
@@ -34,33 +34,40 @@ $node->safe_psql(
 	'postgres',
 	q[ALTER SYSTEM SET restart_after_crash = 1;
 				   ALTER SYSTEM SET log_connections = 1;
-				   SELECT pg_reload_conf();]);
+				   SELECT pg_reload_conf();]
+);
 
 # Run psql, keeping session alive, so we have an alive backend to kill.
 my ($killme_stdin, $killme_stdout, $killme_stderr) = ('', '', '');
 my $killme = IPC::Run::start(
-	[   'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres') ],
+	[
+		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
+		$node->connstr('postgres')
+	],
 	'<',
 	\$killme_stdin,
 	'>',
 	\$killme_stdout,
 	'2>',
 	\$killme_stderr,
-	$psql_timeout);
+	$psql_timeout
+);
 
 # Need a second psql to check if crash-restart happened.
 my ($monitor_stdin, $monitor_stdout, $monitor_stderr) = ('', '', '');
 my $monitor = IPC::Run::start(
-	[   'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres') ],
+	[
+		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
+		$node->connstr('postgres')
+	],
 	'<',
 	\$monitor_stdin,
 	'>',
 	\$monitor_stdout,
 	'2>',
 	\$monitor_stderr,
-	$psql_timeout);
+	$psql_timeout
+);
 
 #create table, insert row that should survive
 $killme_stdin .= q[
@@ -108,12 +115,14 @@ is($ret, 0, "killed process with SIGQUIT");
 $killme_stdin .= q[
 SELECT 1;
 ];
-ok( pump_until(
+ok(
+	pump_until(
 		$killme,
 		\$killme_stderr,
 		qr/WARNING:  terminating connection because of crash of another server process|server closed the connection unexpectedly/m
 	),
-	"psql query died successfully after SIGQUIT");
+	"psql query died successfully after SIGQUIT"
+);
 $killme_stderr = '';
 $killme_stdout = '';
 $killme->finish;
@@ -121,21 +130,26 @@ $killme->finish;
 # Wait till server restarts - we should get the WARNING here, but
 # sometimes the server is unable to send that, if interrupted while
 # sending.
-ok( pump_until(
+ok(
+	pump_until(
 		$monitor,
 		\$monitor_stderr,
 		qr/WARNING:  terminating connection because of crash of another server process|server closed the connection unexpectedly/m
 	),
-	"psql monitor died successfully after SIGQUIT");
+	"psql monitor died successfully after SIGQUIT"
+);
 $monitor->finish;
 
 # Wait till server restarts
-is( $node->poll_query_until(
+is(
+	$node->poll_query_until(
 		'postgres',
 		'SELECT $$restarted after sigquit$$;',
-		'restarted after sigquit'),
+		'restarted after sigquit'
+	),
 	"1",
-	"reconnected after SIGQUIT");
+	"reconnected after SIGQUIT"
+);
 
 
 # restart psql processes, now that the crash cycle finished
@@ -192,21 +206,26 @@ is($ret, 0, "killed process with KILL");
 $killme_stdin .= q[
 SELECT 1;
 ];
-ok( pump_until(
+ok(
+	pump_until(
 		$killme, \$killme_stderr,
-		qr/server closed the connection unexpectedly/m),
-	"psql query died successfully after SIGKILL");
+		qr/server closed the connection unexpectedly/m
+	),
+	"psql query died successfully after SIGKILL"
+);
 $killme->finish;
 
 # Wait till server restarts - we should get the WARNING here, but
 # sometimes the server is unable to send that, if interrupted while
 # sending.
-ok( pump_until(
+ok(
+	pump_until(
 		$monitor,
 		\$monitor_stderr,
 		qr/WARNING:  terminating connection because of crash of another server process|server closed the connection unexpectedly/m
 	),
-	"psql monitor died successfully after SIGKILL");
+	"psql monitor died successfully after SIGKILL"
+);
 $monitor->finish;
 
 # Wait till server restarts
@@ -214,30 +233,38 @@ is($node->poll_query_until('postgres', 'SELECT 1', '1'),
 	"1", "reconnected after SIGKILL");
 
 # Make sure the committed rows survived, in-progress ones not
-is( $node->safe_psql('postgres', 'SELECT * FROM alive'),
+is(
+	$node->safe_psql('postgres', 'SELECT * FROM alive'),
 	"committed-before-sigquit\ncommitted-before-sigkill",
-	'data survived');
+	'data survived'
+);
 
-is( $node->safe_psql(
+is(
+	$node->safe_psql(
 		'postgres',
 		'INSERT INTO alive VALUES($$before-orderly-restart$$) RETURNING status'
 	),
 	'before-orderly-restart',
-	'can still write after crash restart');
+	'can still write after crash restart'
+);
 
 # Just to be sure, check that an orderly restart now still works
 $node->restart();
 
-is( $node->safe_psql('postgres', 'SELECT * FROM alive'),
+is(
+	$node->safe_psql('postgres', 'SELECT * FROM alive'),
 	"committed-before-sigquit\ncommitted-before-sigkill\nbefore-orderly-restart",
-	'data survived');
+	'data survived'
+);
 
-is( $node->safe_psql(
+is(
+	$node->safe_psql(
 		'postgres',
 		'INSERT INTO alive VALUES($$after-orderly-restart$$) RETURNING status'
 	),
 	'after-orderly-restart',
-	'can still write after orderly restart');
+	'can still write after orderly restart'
+);
 
 $node->stop();
 
diff --git a/src/test/recovery/t/014_unlogged_reinit.pl b/src/test/recovery/t/014_unlogged_reinit.pl
index 103c0a2..8fe8ea6 100644
--- a/src/test/recovery/t/014_unlogged_reinit.pl
+++ b/src/test/recovery/t/014_unlogged_reinit.pl
@@ -67,15 +67,23 @@ ok(-f "$pgdata/${baseUnloggedPath}_init", 'init fork in base still exists');
 ok(-f "$pgdata/$baseUnloggedPath", 'main fork in base recreated at startup');
 ok(!-f "$pgdata/${baseUnloggedPath}_vm",
 	'vm fork in base removed at startup');
-ok( !-f "$pgdata/${baseUnloggedPath}_fsm",
-	'fsm fork in base removed at startup');
+ok(
+	!-f "$pgdata/${baseUnloggedPath}_fsm",
+	'fsm fork in base removed at startup'
+);
 
 # check unlogged table in tablespace
-ok( -f "$pgdata/${ts1UnloggedPath}_init",
-	'init fork still exists in tablespace');
+ok(
+	-f "$pgdata/${ts1UnloggedPath}_init",
+	'init fork still exists in tablespace'
+);
 ok(-f "$pgdata/$ts1UnloggedPath",
 	'main fork in tablespace recreated at startup');
-ok( !-f "$pgdata/${ts1UnloggedPath}_vm",
-	'vm fork in tablespace removed at startup');
-ok( !-f "$pgdata/${ts1UnloggedPath}_fsm",
-	'fsm fork in tablespace removed at startup');
+ok(
+	!-f "$pgdata/${ts1UnloggedPath}_vm",
+	'vm fork in tablespace removed at startup'
+);
+ok(
+	!-f "$pgdata/${ts1UnloggedPath}_fsm",
+	'fsm fork in tablespace removed at startup'
+);
diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm
index 5ca9e0d..ced279c 100644
--- a/src/test/ssl/ServerSetup.pm
+++ b/src/test/ssl/ServerSetup.pm
@@ -43,7 +43,8 @@ sub test_connect_ok
 	my $cmd = [
 		'psql', '-X', '-A', '-t', '-c',
 		"SELECT \$\$connected with $connstr\$\$",
-		'-d', "$common_connstr $connstr" ];
+		'-d', "$common_connstr $connstr"
+	];
 
 	command_ok($cmd, $test_name);
 }
@@ -55,7 +56,8 @@ sub test_connect_fails
 	my $cmd = [
 		'psql', '-X', '-A', '-t', '-c',
 		"SELECT \$\$connected with $connstr\$\$",
-		'-d', "$common_connstr $connstr" ];
+		'-d', "$common_connstr $connstr"
+	];
 
 	command_fails_like($cmd, $expected_stderr, $test_name);
 }
diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl
index e550207..71dfbb3 100644
--- a/src/test/ssl/t/001_ssltests.pl
+++ b/src/test/ssl/t/001_ssltests.pl
@@ -62,7 +62,8 @@ close $sslconf;
 
 command_fails(
 	[ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],
-	'restart fails with password-protected key file with wrong password');
+	'restart fails with password-protected key file with wrong password'
+);
 $node->_update_pid(0);
 
 open $sslconf, '>', $node->data_dir . "/sslconfig.conf";
@@ -94,24 +95,28 @@ $common_connstr =
 test_connect_fails(
 	$common_connstr, "sslmode=disable",
 	qr/\Qno pg_hba.conf entry\E/,
-	"server doesn't accept non-SSL connections");
+	"server doesn't accept non-SSL connections"
+);
 
 # Try without a root cert. In sslmode=require, this should work. In verify-ca
 # or verify-full mode it should fail.
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=invalid sslmode=require",
-	"connect without server root cert sslmode=require");
+	"connect without server root cert sslmode=require"
+);
 test_connect_fails(
 	$common_connstr,
 	"sslrootcert=invalid sslmode=verify-ca",
 	qr/root certificate file "invalid" does not exist/,
-	"connect without server root cert sslmode=verify-ca");
+	"connect without server root cert sslmode=verify-ca"
+);
 test_connect_fails(
 	$common_connstr,
 	"sslrootcert=invalid sslmode=verify-full",
 	qr/root certificate file "invalid" does not exist/,
-	"connect without server root cert sslmode=verify-full");
+	"connect without server root cert sslmode=verify-full"
+);
 
 # Try with wrong root cert, should fail. (We're using the client CA as the
 # root, but the server's key is signed by the server CA.)
@@ -135,26 +140,31 @@ test_connect_fails($common_connstr,
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=require",
-	"connect with correct server CA cert file sslmode=require");
+	"connect with correct server CA cert file sslmode=require"
+);
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca",
-	"connect with correct server CA cert file sslmode=verify-ca");
+	"connect with correct server CA cert file sslmode=verify-ca"
+);
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-full",
-	"connect with correct server CA cert file sslmode=verify-full");
+	"connect with correct server CA cert file sslmode=verify-full"
+);
 
 # Test with cert root file that contains two certificates. The client should
 # be able to pick the right one, regardless of the order in the file.
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/both-cas-1.crt sslmode=verify-ca",
-	"cert root file that contains two certificates, order 1");
+	"cert root file that contains two certificates, order 1"
+);
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/both-cas-2.crt sslmode=verify-ca",
-	"cert root file that contains two certificates, order 2");
+	"cert root file that contains two certificates, order 2"
+);
 
 # CRL tests
 
@@ -162,20 +172,23 @@ test_connect_ok(
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=invalid",
-	"sslcrl option with invalid file name");
+	"sslcrl option with invalid file name"
+);
 
 # A CRL belonging to a different CA is not accepted, fails
 test_connect_fails(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/client.crl",
 	qr/SSL error/,
-	"CRL belonging to a different CA");
+	"CRL belonging to a different CA"
+);
 
 # With the correct CRL, succeeds (this cert is not revoked)
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl",
-	"CRL with a non-revoked cert");
+	"CRL with a non-revoked cert"
+);
 
 # Check that connecting with verify-full fails, when the hostname doesn't
 # match the hostname in the server's certificate.
@@ -185,16 +198,19 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"sslmode=require host=wronghost.test",
-	"mismatch between host name and server certificate sslmode=require");
+	"mismatch between host name and server certificate sslmode=require"
+);
 test_connect_ok(
 	$common_connstr,
 	"sslmode=verify-ca host=wronghost.test",
-	"mismatch between host name and server certificate sslmode=verify-ca");
+	"mismatch between host name and server certificate sslmode=verify-ca"
+);
 test_connect_fails(
 	$common_connstr,
 	"sslmode=verify-full host=wronghost.test",
 	qr/\Qserver certificate for "common-name.pg-ssltest.test" does not match host name "wronghost.test"\E/,
-	"mismatch between host name and server certificate sslmode=verify-full");
+	"mismatch between host name and server certificate sslmode=verify-full"
+);
 
 # Test Subject Alternative Names.
 switch_server_cert($node, 'server-multiple-alt-names');
@@ -205,26 +221,31 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"host=dns1.alt-name.pg-ssltest.test",
-	"host name matching with X.509 Subject Alternative Names 1");
+	"host name matching with X.509 Subject Alternative Names 1"
+);
 test_connect_ok(
 	$common_connstr,
 	"host=dns2.alt-name.pg-ssltest.test",
-	"host name matching with X.509 Subject Alternative Names 2");
+	"host name matching with X.509 Subject Alternative Names 2"
+);
 test_connect_ok(
 	$common_connstr,
 	"host=foo.wildcard.pg-ssltest.test",
-	"host name matching with X.509 Subject Alternative Names wildcard");
+	"host name matching with X.509 Subject Alternative Names wildcard"
+);
 
 test_connect_fails(
 	$common_connstr,
 	"host=wronghost.alt-name.pg-ssltest.test",
 	qr/\Qserver certificate for "dns1.alt-name.pg-ssltest.test" (and 2 other names) does not match host name "wronghost.alt-name.pg-ssltest.test"\E/,
-	"host name not matching with X.509 Subject Alternative Names");
+	"host name not matching with X.509 Subject Alternative Names"
+);
 test_connect_fails(
 	$common_connstr,
 	"host=deep.subdomain.wildcard.pg-ssltest.test",
 	qr/\Qserver certificate for "dns1.alt-name.pg-ssltest.test" (and 2 other names) does not match host name "deep.subdomain.wildcard.pg-ssltest.test"\E/,
-	"host name not matching with X.509 Subject Alternative Names wildcard");
+	"host name not matching with X.509 Subject Alternative Names wildcard"
+);
 
 # Test certificate with a single Subject Alternative Name. (this gives a
 # slightly different error message, that's all)
@@ -236,13 +257,15 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"host=single.alt-name.pg-ssltest.test",
-	"host name matching with a single X.509 Subject Alternative Name");
+	"host name matching with a single X.509 Subject Alternative Name"
+);
 
 test_connect_fails(
 	$common_connstr,
 	"host=wronghost.alt-name.pg-ssltest.test",
 	qr/\Qserver certificate for "single.alt-name.pg-ssltest.test" does not match host name "wronghost.alt-name.pg-ssltest.test"\E/,
-	"host name not matching with a single X.509 Subject Alternative Name");
+	"host name not matching with a single X.509 Subject Alternative Name"
+);
 test_connect_fails(
 	$common_connstr,
 	"host=deep.subdomain.wildcard.pg-ssltest.test",
@@ -260,16 +283,19 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"host=dns1.alt-name.pg-ssltest.test",
-	"certificate with both a CN and SANs 1");
+	"certificate with both a CN and SANs 1"
+);
 test_connect_ok(
 	$common_connstr,
 	"host=dns2.alt-name.pg-ssltest.test",
-	"certificate with both a CN and SANs 2");
+	"certificate with both a CN and SANs 2"
+);
 test_connect_fails(
 	$common_connstr,
 	"host=common-name.pg-ssltest.test",
 	qr/\Qserver certificate for "dns1.alt-name.pg-ssltest.test" (and 1 other name) does not match host name "common-name.pg-ssltest.test"\E/,
-	"certificate with both a CN and SANs ignores CN");
+	"certificate with both a CN and SANs ignores CN"
+);
 
 # Finally, test a server certificate that has no CN or SANs. Of course, that's
 # not a very sensible certificate, but libpq should handle it gracefully.
@@ -280,12 +306,14 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"sslmode=verify-ca host=common-name.pg-ssltest.test",
-	"server certificate without CN or SANs sslmode=verify-ca");
+	"server certificate without CN or SANs sslmode=verify-ca"
+);
 test_connect_fails(
 	$common_connstr,
 	"sslmode=verify-full host=common-name.pg-ssltest.test",
 	qr/could not get server's host name from server certificate/,
-	"server certificate without CN or SANs sslmode=verify-full");
+	"server certificate without CN or SANs sslmode=verify-full"
+);
 
 # Test that the CRL works
 switch_server_cert($node, 'server-revoked');
@@ -297,12 +325,14 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca",
-	"connects without client-side CRL");
+	"connects without client-side CRL"
+);
 test_connect_fails(
 	$common_connstr,
 	"sslrootcert=ssl/root+server_ca.crt sslmode=verify-ca sslcrl=ssl/root+server.crl",
 	qr/SSL error/,
-	"does not connect with client-side CRL");
+	"does not connect with client-side CRL"
+);
 
 ### Server-side tests.
 ###
@@ -318,20 +348,23 @@ test_connect_fails(
 	$common_connstr,
 	"user=ssltestuser sslcert=invalid",
 	qr/connection requires a valid client certificate/,
-	"certificate authorization fails without client cert");
+	"certificate authorization fails without client cert"
+);
 
 # correct client cert
 test_connect_ok(
 	$common_connstr,
 	"user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_tmp.key",
-	"certificate authorization succeeds with correct client cert");
+	"certificate authorization succeeds with correct client cert"
+);
 
 # client key with wrong permissions
 test_connect_fails(
 	$common_connstr,
 	"user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client_wrongperms_tmp.key",
 	qr!\Qprivate key file "ssl/client_wrongperms_tmp.key" has group or world access\E!,
-	"certificate authorization fails because of file permissions");
+	"certificate authorization fails because of file permissions"
+);
 
 # client cert belonging to another user
 test_connect_fails(
@@ -346,7 +379,8 @@ test_connect_fails(
 	$common_connstr,
 	"user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked_tmp.key",
 	qr/SSL error/,
-	"certificate authorization fails with revoked client cert");
+	"certificate authorization fails with revoked client cert"
+);
 
 # intermediate client_ca.crt is provided by client, and isn't in server's ssl_ca_file
 switch_server_cert($node, 'server-cn-only', 'root_ca');
@@ -356,7 +390,8 @@ $common_connstr =
 test_connect_ok(
 	$common_connstr,
 	"sslmode=require sslcert=ssl/client+client_ca.crt",
-	"intermediate client certificate is provided by client");
+	"intermediate client certificate is provided by client"
+);
 test_connect_fails($common_connstr, "sslmode=require sslcert=ssl/client.crt",
 	qr/SSL error/, "intermediate client certificate is missing");
 
diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl
index 52a8f45..b5c34ec 100644
--- a/src/test/ssl/t/002_scram.pl
+++ b/src/test/ssl/t/002_scram.pl
@@ -53,7 +53,8 @@ test_connect_ok($common_connstr, '',
 test_connect_ok(
 	$common_connstr,
 	"scram_channel_binding=tls-unique",
-	"SCRAM authentication with tls-unique as channel binding");
+	"SCRAM authentication with tls-unique as channel binding"
+);
 test_connect_ok($common_connstr, "scram_channel_binding=''",
 	"SCRAM authentication without channel binding");
 if ($supports_tls_server_end_point)
@@ -61,7 +62,8 @@ if ($supports_tls_server_end_point)
 	test_connect_ok(
 		$common_connstr,
 		"scram_channel_binding=tls-server-end-point",
-		"SCRAM authentication with tls-server-end-point as channel binding");
+		"SCRAM authentication with tls-server-end-point as channel binding"
+	);
 }
 else
 {
@@ -69,13 +71,15 @@ else
 		$common_connstr,
 		"scram_channel_binding=tls-server-end-point",
 		qr/channel binding type "tls-server-end-point" is not supported by this build/,
-		"SCRAM authentication with tls-server-end-point as channel binding");
+		"SCRAM authentication with tls-server-end-point as channel binding"
+	);
 	$number_of_tests++;
 }
 test_connect_fails(
 	$common_connstr,
 	"scram_channel_binding=not-exists",
 	qr/unsupported SCRAM channel-binding type/,
-	"SCRAM authentication with invalid channel binding");
+	"SCRAM authentication with invalid channel binding"
+);
 
 done_testing($number_of_tests);
diff --git a/src/test/subscription/t/001_rep_changes.pl b/src/test/subscription/t/001_rep_changes.pl
index 503556f..4fcb22d 100644
--- a/src/test/subscription/t/001_rep_changes.pl
+++ b/src/test/subscription/t/001_rep_changes.pl
@@ -115,8 +115,10 @@ is($result, qq(20|-20|-1), 'check replicated changes on subscriber');
 
 $result =
   $node_subscriber->safe_psql('postgres', "SELECT c, b, a FROM tab_mixed");
-is( $result, qq(|foo|1
-|bar|2), 'check replicated changes with different column order');
+is(
+	$result, qq(|foo|1
+|bar|2), 'check replicated changes with different column order'
+);
 
 $result = $node_subscriber->safe_psql('postgres',
 	"SELECT count(*), min(a), max(a) FROM tab_include");
@@ -155,10 +157,12 @@ is($result, qq(20|1|100),
 
 $result = $node_subscriber->safe_psql('postgres',
 	"SELECT x FROM tab_full2 ORDER BY 1");
-is( $result, qq(a
+is(
+	$result, qq(a
 bb
 bb),
-	'update works with REPLICA IDENTITY FULL and text datums');
+	'update works with REPLICA IDENTITY FULL and text datums'
+);
 
 # check that change of connection string and/or publication list causes
 # restart of subscription workers. Not all of these are registered as tests
diff --git a/src/test/subscription/t/002_types.pl b/src/test/subscription/t/002_types.pl
index a49e56f..12bacbc 100644
--- a/src/test/subscription/t/002_types.pl
+++ b/src/test/subscription/t/002_types.pl
@@ -240,7 +240,8 @@ $node_publisher->safe_psql(
 		(2, '"zzz"=>"foo"'),
 		(3, '"123"=>"321"'),
 		(4, '"yellow horse"=>"moaned"');
-));
+)
+);
 
 $node_publisher->wait_for_catchup($appname);
 
@@ -262,9 +263,11 @@ my $result = $node_subscriber->safe_psql(
 	SELECT a, b FROM tst_range ORDER BY a;
 	SELECT a, b, c FROM tst_range_array ORDER BY a;
 	SELECT a, b FROM tst_hstore ORDER BY a;
-));
+)
+);
 
-is( $result, '1|{1,2,3}
+is(
+	$result, '1|{1,2,3}
 2|{2,3,1}
 3|{3,2,1}
 4|{4,3,2}
@@ -329,7 +332,8 @@ e|{d,NULL}
 2|"zzz"=>"foo"
 3|"123"=>"321"
 4|"yellow horse"=>"moaned"',
-	'check replicated inserts on subscriber');
+	'check replicated inserts on subscriber'
+);
 
 # Run batch of updates
 $node_publisher->safe_psql(
@@ -361,7 +365,8 @@ $node_publisher->safe_psql(
 	UPDATE tst_range_array SET b = tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz, 'infinity'), c = '{NULL, "[11,9999999]"}' WHERE a > 3;
 	UPDATE tst_hstore SET b = '"updated"=>"value"' WHERE a < 3;
 	UPDATE tst_hstore SET b = '"also"=>"updated"' WHERE a = 3;
-));
+)
+);
 
 $node_publisher->wait_for_catchup($appname);
 
@@ -383,9 +388,11 @@ $result = $node_subscriber->safe_psql(
 	SELECT a, b FROM tst_range ORDER BY a;
 	SELECT a, b, c FROM tst_range_array ORDER BY a;
 	SELECT a, b FROM tst_hstore ORDER BY a;
-));
+)
+);
 
-is( $result, '1|{4,5,6}
+is(
+	$result, '1|{4,5,6}
 2|{2,3,1}
 3|{3,2,1}
 4|{4,5,6,1}
@@ -450,7 +457,8 @@ e|{e,d}
 2|"updated"=>"value"
 3|"also"=>"updated"
 4|"yellow horse"=>"moaned"',
-	'check replicated updates on subscriber');
+	'check replicated updates on subscriber'
+);
 
 # Run batch of deletes
 $node_publisher->safe_psql(
@@ -481,7 +489,8 @@ $node_publisher->safe_psql(
 	DELETE FROM tst_range_array WHERE a = 1;
 	DELETE FROM tst_range_array WHERE tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz, 'Mon Aug 05 00:00:00 2014 CEST'::timestamptz) && b;
 	DELETE FROM tst_hstore WHERE a = 1;
-));
+)
+);
 
 $node_publisher->wait_for_catchup($appname);
 
@@ -503,9 +512,11 @@ $result = $node_subscriber->safe_psql(
 	SELECT a, b FROM tst_range ORDER BY a;
 	SELECT a, b, c FROM tst_range_array ORDER BY a;
 	SELECT a, b FROM tst_hstore ORDER BY a;
-));
+)
+);
 
-is( $result, '3|{3,2,1}
+is(
+	$result, '3|{3,2,1}
 4|{4,5,6,1}
 5|{4,5,6,1}
 {3,1,2}|{c,a,b}|{3.3,1.1,2.2}|{"3 years","1 year","2 years"}
@@ -539,7 +550,8 @@ e|{e,d}
 2|"updated"=>"value"
 3|"also"=>"updated"
 4|"yellow horse"=>"moaned"',
-	'check replicated deletes on subscriber');
+	'check replicated deletes on subscriber'
+);
 
 $node_subscriber->stop('fast');
 $node_publisher->stop('fast');
diff --git a/src/test/subscription/t/003_constraints.pl b/src/test/subscription/t/003_constraints.pl
index a5b548e..90edfd4 100644
--- a/src/test/subscription/t/003_constraints.pl
+++ b/src/test/subscription/t/003_constraints.pl
@@ -92,7 +92,8 @@ CREATE TRIGGER filter_basic_dml_trg
     BEFORE INSERT ON tab_fk_ref
     FOR EACH ROW EXECUTE PROCEDURE filter_basic_dml_fn();
 ALTER TABLE tab_fk_ref ENABLE REPLICA TRIGGER filter_basic_dml_trg;
-});
+}
+);
 
 # Insert data
 $node_publisher->safe_psql('postgres',
diff --git a/src/test/subscription/t/005_encoding.pl b/src/test/subscription/t/005_encoding.pl
index 1977aa5..949ae72 100644
--- a/src/test/subscription/t/005_encoding.pl
+++ b/src/test/subscription/t/005_encoding.pl
@@ -8,13 +8,15 @@ use Test::More tests => 1;
 my $node_publisher = get_new_node('publisher');
 $node_publisher->init(
 	allows_streaming => 'logical',
-	extra            => [ '--locale=C', '--encoding=UTF8' ]);
+	extra            => [ '--locale=C', '--encoding=UTF8' ]
+);
 $node_publisher->start;
 
 my $node_subscriber = get_new_node('subscriber');
 $node_subscriber->init(
 	allows_streaming => 'logical',
-	extra            => [ '--locale=C', '--encoding=LATIN1' ]);
+	extra            => [ '--locale=C', '--encoding=LATIN1' ]
+);
 $node_subscriber->start;
 
 my $ddl = "CREATE TABLE test1 (a int, b text);";
@@ -43,11 +45,13 @@ $node_publisher->safe_psql('postgres',
 
 $node_publisher->wait_for_catchup($appname);
 
-is( $node_subscriber->safe_psql(
+is(
+	$node_subscriber->safe_psql(
 		'postgres', q{SELECT a FROM test1 WHERE b = E'Mot\xf6rhead'}
 	),                                                     # LATIN1
 	qq(1),
-	'data replicated to subscriber');
+	'data replicated to subscriber'
+);
 
 $node_subscriber->stop;
 $node_publisher->stop;
diff --git a/src/test/subscription/t/006_rewrite.pl b/src/test/subscription/t/006_rewrite.pl
index e470c07..9cd14fe 100644
--- a/src/test/subscription/t/006_rewrite.pl
+++ b/src/test/subscription/t/006_rewrite.pl
@@ -39,10 +39,12 @@ $node_publisher->safe_psql('postgres',
 
 $node_publisher->wait_for_catchup($appname);
 
-is( $node_subscriber->safe_psql('postgres', q{SELECT a, b FROM test1}),
+is(
+	$node_subscriber->safe_psql('postgres', q{SELECT a, b FROM test1}),
 	qq(1|one
 2|two),
-	'initial data replicated to subscriber');
+	'initial data replicated to subscriber'
+);
 
 # DDL that causes a heap rewrite
 my $ddl2 = "ALTER TABLE test1 ADD c int NOT NULL DEFAULT 0;";
@@ -56,11 +58,13 @@ $node_publisher->safe_psql('postgres',
 
 $node_publisher->wait_for_catchup($appname);
 
-is( $node_subscriber->safe_psql('postgres', q{SELECT a, b, c FROM test1}),
+is(
+	$node_subscriber->safe_psql('postgres', q{SELECT a, b, c FROM test1}),
 	qq(1|one|0
 2|two|0
 3|three|33),
-	'data replicated to subscriber');
+	'data replicated to subscriber'
+);
 
 $node_subscriber->stop;
 $node_publisher->stop;
diff --git a/src/test/subscription/t/007_ddl.pl b/src/test/subscription/t/007_ddl.pl
index 2697ee5..1110522 100644
--- a/src/test/subscription/t/007_ddl.pl
+++ b/src/test/subscription/t/007_ddl.pl
@@ -35,7 +35,8 @@ ALTER SUBSCRIPTION mysub DISABLE;
 ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
 DROP SUBSCRIPTION mysub;
 COMMIT;
-});
+}
+);
 
 pass "subscription disable and drop in same transaction did not hang";
 
diff --git a/src/tools/git_changelog b/src/tools/git_changelog
index 352dc1c..9f95cc5 100755
--- a/src/tools/git_changelog
+++ b/src/tools/git_changelog
@@ -83,7 +83,8 @@ Getopt::Long::GetOptions(
 	'non-master-only' => \$non_master_only,
 	'post-date'       => \$post_date,
 	'oldest-first'    => \$oldest_first,
-	'since=s'         => \$since) || usage();
+	'since=s'         => \$since
+) || usage();
 usage() if @ARGV;
 
 my @git = qw(git log --format=fuller --date=iso);
@@ -152,7 +153,8 @@ for my $branch (@BRANCHES)
 				'branch'   => $branch,
 				'commit'   => $1,
 				'last_tag' => $last_tag,
-				'message'  => '',);
+				'message'  => '',
+			);
 			if ($line =~ /^commit\s+\S+\s+(\S+)/)
 			{
 				$last_parent = $1;
@@ -317,7 +319,8 @@ sub push_commit
 			'message'   => $c->{'message'},
 			'commit'    => $c->{'commit'},
 			'commits'   => [],
-			'timestamp' => $ts };
+			'timestamp' => $ts
+		};
 		push @{ $all_commits{$ht} }, $cc;
 	}
 
@@ -326,7 +329,8 @@ sub push_commit
 		'branch'   => $c->{'branch'},
 		'commit'   => $c->{'commit'},
 		'date'     => $c->{'date'},
-		'last_tag' => $c->{'last_tag'} };
+		'last_tag' => $c->{'last_tag'}
+	};
 	push @{ $cc->{'commits'} }, $smallc;
 	push @{ $all_commits_by_branch{ $c->{'branch'} } }, $cc;
 	$cc->{'branch_position'}{ $c->{'branch'} } =
@@ -380,7 +384,8 @@ sub output_details
 				"%s [%s] %s\n",
 				substr($c->{'date'},   0, 10),
 				substr($c->{'commit'}, 0, 9),
-				substr($1,             0, 56));
+				substr($1,             0, 56)
+			);
 		}
 		else
 		{
diff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm
index 884c330..9da57ea 100644
--- a/src/tools/msvc/Install.pm
+++ b/src/tools/msvc/Install.pm
@@ -25,7 +25,8 @@ my @client_program_files = (
 	'libpgtypes',     'libpq',      'pg_basebackup', 'pg_config',
 	'pg_dump',        'pg_dumpall', 'pg_isready',    'pg_receivewal',
 	'pg_recvlogical', 'pg_restore', 'psql',          'reindexdb',
-	'vacuumdb',       @client_contribs);
+	'vacuumdb',       @client_contribs
+);
 
 sub lcopy
 {
@@ -80,7 +81,8 @@ sub Install
 	my @client_dirs = ('bin', 'lib', 'share', 'symbols');
 	my @all_dirs = (
 		@client_dirs, 'doc', 'doc/contrib', 'doc/extension', 'share/contrib',
-		'share/extension', 'share/timezonesets', 'share/tsearch_data');
+		'share/extension', 'share/timezonesets', 'share/tsearch_data'
+	);
 	if ($insttype eq "client")
 	{
 		EnsureDirectories($target, @client_dirs);
@@ -95,7 +97,8 @@ sub Install
 	my @top_dir      = ("src");
 	@top_dir = ("src\\bin", "src\\interfaces") if ($insttype eq "client");
 	File::Find::find(
-		{   wanted => sub {
+		{
+			wanted => sub {
 				/^.*\.sample\z/s
 				  && push(@$sample_files, $File::Find::name);
 
@@ -103,13 +106,15 @@ sub Install
 				$_ eq 'share' and $File::Find::prune = 1;
 			}
 		},
-		@top_dir);
+		@top_dir
+	);
 	CopySetOfFiles('config files', $sample_files, $target . '/share/');
 	CopyFiles(
 		'Import libraries',
 		$target . '/lib/',
 		"$conf\\", "postgres\\postgres.lib", "libpgcommon\\libpgcommon.lib",
-		"libpgport\\libpgport.lib");
+		"libpgport\\libpgport.lib"
+	);
 	CopyContribFiles($config, $target);
 	CopyIncludeFiles($target);
 
@@ -118,36 +123,44 @@ sub Install
 		CopySetOfFiles(
 			'timezone names',
 			[ glob('src\timezone\tznames\*.txt') ],
-			$target . '/share/timezonesets/');
+			$target . '/share/timezonesets/'
+		);
 		CopyFiles(
 			'timezone sets',
 			$target . '/share/timezonesets/',
-			'src/timezone/tznames/', 'Default', 'Australia', 'India');
+			'src/timezone/tznames/', 'Default', 'Australia', 'India'
+		);
 		CopySetOfFiles(
 			'BKI files',
 			[ glob("src\\backend\\catalog\\postgres.*") ],
-			$target . '/share/');
+			$target . '/share/'
+		);
 		CopySetOfFiles(
 			'SQL files',
 			[ glob("src\\backend\\catalog\\*.sql") ],
-			$target . '/share/');
+			$target . '/share/'
+		);
 		CopyFiles(
 			'Information schema data', $target . '/share/',
-			'src/backend/catalog/',    'sql_features.txt');
+			'src/backend/catalog/',    'sql_features.txt'
+		);
 		CopyFiles(
 			'Error code data',    $target . '/share/',
-			'src/backend/utils/', 'errcodes.txt');
+			'src/backend/utils/', 'errcodes.txt'
+		);
 		GenerateConversionScript($target);
 		GenerateTimezoneFiles($target, $conf);
 		GenerateTsearchFiles($target);
 		CopySetOfFiles(
 			'Stopword files',
 			[ glob("src\\backend\\snowball\\stopwords\\*.stop") ],
-			$target . '/share/tsearch_data/');
+			$target . '/share/tsearch_data/'
+		);
 		CopySetOfFiles(
 			'Dictionaries sample files',
 			[ glob("src\\backend\\tsearch\\dicts\\*_sample*") ],
-			$target . '/share/tsearch_data/');
+			$target . '/share/tsearch_data/'
+		);
 
 		my $pl_extension_files = [];
 		my @pldirs             = ('src/pl/plpgsql/src');
@@ -155,7 +168,8 @@ sub Install
 		push @pldirs, "src/pl/plpython" if $config->{python};
 		push @pldirs, "src/pl/tcl"      if $config->{tcl};
 		File::Find::find(
-			{   wanted => sub {
+			{
+				wanted => sub {
 					/^(.*--.*\.sql|.*\.control)\z/s
 					  && push(@$pl_extension_files, $File::Find::name);
 
@@ -163,7 +177,8 @@ sub Install
 					$_ eq 'share' and $File::Find::prune = 1;
 				}
 			},
-			@pldirs);
+			@pldirs
+		);
 		CopySetOfFiles('PL Extension files',
 			$pl_extension_files, $target . '/share/extension/');
 	}
@@ -257,10 +272,14 @@ sub CopySolutionOutput
 		# Check if this project uses a shared library by looking if
 		# SO_MAJOR_VERSION is defined in its Makefile, whose path
 		# can be found using the resource file of this project.
-		if ((      $vcproj eq 'vcxproj'
-				&& $proj =~ qr{ResourceCompile\s*Include="([^"]+)"})
+		if (
+			(
+				   $vcproj eq 'vcxproj'
+				&& $proj =~ qr{ResourceCompile\s*Include="([^"]+)"}
+			)
 			|| (   $vcproj eq 'vcproj'
-				&& $proj =~ qr{File\s*RelativePath="([^\"]+)\.rc"}))
+				&& $proj =~ qr{File\s*RelativePath="([^\"]+)\.rc"})
+		  )
 		{
 			my $projpath = dirname($1);
 			my $mfname =
@@ -462,11 +481,11 @@ sub CopyContribFiles
 		while (my $d = readdir($D))
 		{
 			# These configuration-based exclusions must match vcregress.pl
-			next if ($d eq "uuid-ossp"       && !defined($config->{uuid}));
-			next if ($d eq "sslinfo"         && !defined($config->{openssl}));
-			next if ($d eq "xml2"            && !defined($config->{xml}));
-			next if ($d =~ /_plperl$/        && !defined($config->{perl}));
-			next if ($d =~ /_plpython$/      && !defined($config->{python}));
+			next if ($d eq "uuid-ossp"  && !defined($config->{uuid}));
+			next if ($d eq "sslinfo"    && !defined($config->{openssl}));
+			next if ($d eq "xml2"       && !defined($config->{xml}));
+			next if ($d =~ /_plperl$/   && !defined($config->{perl}));
+			next if ($d =~ /_plpython$/ && !defined($config->{python}));
 			next if ($d eq "sepgsql");
 
 			CopySubdirFiles($subdir, $d, $config, $target);
@@ -574,7 +593,8 @@ sub ParseAndCleanRule
 		for (
 			$i = index($flist, '$(addsuffix ') + 12;
 			$i < length($flist);
-			$i++)
+			$i++
+		  )
 		{
 			$pcount++ if (substr($flist, $i, 1) eq '(');
 			$pcount-- if (substr($flist, $i, 1) eq ')');
@@ -599,23 +619,27 @@ sub CopyIncludeFiles
 		'src/include/',   'postgres_ext.h',
 		'pg_config.h',    'pg_config_ext.h',
 		'pg_config_os.h', 'dynloader.h',
-		'pg_config_manual.h');
+		'pg_config_manual.h'
+	);
 	lcopy('src/include/libpq/libpq-fs.h', $target . '/include/libpq/')
 	  || croak 'Could not copy libpq-fs.h';
 
 	CopyFiles(
 		'Libpq headers',
 		$target . '/include/',
-		'src/interfaces/libpq/', 'libpq-fe.h', 'libpq-events.h');
+		'src/interfaces/libpq/', 'libpq-fe.h', 'libpq-events.h'
+	);
 	CopyFiles(
 		'Libpq internal headers',
 		$target . '/include/internal/',
-		'src/interfaces/libpq/', 'libpq-int.h', 'pqexpbuffer.h');
+		'src/interfaces/libpq/', 'libpq-int.h', 'pqexpbuffer.h'
+	);
 
 	CopyFiles(
 		'Internal headers',
 		$target . '/include/internal/',
-		'src/include/', 'c.h', 'port.h', 'postgres_fe.h');
+		'src/include/', 'c.h', 'port.h', 'postgres_fe.h'
+	);
 	lcopy('src/include/libpq/pqcomm.h', $target . '/include/internal/libpq/')
 	  || croak 'Could not copy pqcomm.h';
 
@@ -623,22 +647,26 @@ sub CopyIncludeFiles
 		'Server headers',
 		$target . '/include/server/',
 		'src/include/', 'pg_config.h', 'pg_config_ext.h', 'pg_config_os.h',
-		'dynloader.h');
+		'dynloader.h'
+	);
 	CopyFiles(
 		'Grammar header',
 		$target . '/include/server/parser/',
-		'src/backend/parser/', 'gram.h');
+		'src/backend/parser/', 'gram.h'
+	);
 	CopySetOfFiles(
 		'',
 		[ glob("src\\include\\*.h") ],
-		$target . '/include/server/');
+		$target . '/include/server/'
+	);
 	my $D;
 	opendir($D, 'src/include') || croak "Could not opendir on src/include!\n";
 
 	CopyFiles(
 		'PL/pgSQL header',
 		$target . '/include/server/',
-		'src/pl/plpgsql/src/', 'plpgsql.h');
+		'src/pl/plpgsql/src/', 'plpgsql.h'
+	);
 
 	# some xcopy progs don't like mixed slash style paths
 	(my $ctarget = $target) =~ s!/!\\!g;
@@ -652,7 +680,8 @@ sub CopyIncludeFiles
 		EnsureDirectories("$target/include/server/$d");
 		my @args = (
 			'xcopy', '/s', '/i', '/q', '/r', '/y', "src\\include\\$d\\*.h",
-			"$ctarget\\include\\server\\$d\\");
+			"$ctarget\\include\\server\\$d\\"
+		);
 		system(@args) && croak("Failed to copy include directory $d\n");
 	}
 	closedir($D);
@@ -665,7 +694,8 @@ sub CopyIncludeFiles
 		'ECPG headers',
 		$target . '/include/',
 		'src/interfaces/ecpg/include/',
-		'ecpg_config.h', split /\s+/, $1);
+		'ecpg_config.h', split /\s+/, $1
+	);
 	$mf =~ /^informix_headers\s*=\s*(.*)$/m
 	  || croak "Could not find informix_headers line\n";
 	EnsureDirectories($target . '/include', 'informix', 'informix/esql');
@@ -673,7 +703,8 @@ sub CopyIncludeFiles
 		'ECPG informix headers',
 		$target . '/include/informix/esql/',
 		'src/interfaces/ecpg/include/',
-		split /\s+/, $1);
+		split /\s+/, $1
+	);
 }
 
 sub GenerateNLSFiles
@@ -686,12 +717,14 @@ sub GenerateNLSFiles
 	EnsureDirectories($target, "share/locale");
 	my @flist;
 	File::Find::find(
-		{   wanted => sub {
+		{
+			wanted => sub {
 				/^nls\.mk\z/s
 				  && !push(@flist, $File::Find::name);
 			}
 		},
-		"src");
+		"src"
+	);
 	foreach (@flist)
 	{
 		my $prgm = DetermineCatalogName($_);
@@ -710,7 +743,8 @@ sub GenerateNLSFiles
 				"$nlspath\\bin\\msgfmt",
 				'-o',
 				"$target\\share\\locale\\$lang\\LC_MESSAGES\\$prgm-$majorver.mo",
-				$_);
+				$_
+			);
 			system(@args) && croak("Could not run msgfmt on $dir\\$_");
 			print ".";
 		}
diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm
index ca6e8e5..5807c0e 100644
--- a/src/tools/msvc/MSBuildProject.pm
+++ b/src/tools/msvc/MSBuildProject.pm
@@ -65,17 +65,23 @@ EOF
 
 	$self->WriteItemDefinitionGroup(
 		$f, 'Debug',
-		{   defs    => "_DEBUG;DEBUG=1",
+		{
+			defs    => "_DEBUG;DEBUG=1",
 			opt     => 'Disabled',
 			strpool => 'false',
-			runtime => 'MultiThreadedDebugDLL' });
+			runtime => 'MultiThreadedDebugDLL'
+		}
+	);
 	$self->WriteItemDefinitionGroup(
 		$f,
 		'Release',
-		{   defs    => "",
+		{
+			defs    => "",
 			opt     => 'Full',
 			strpool => 'true',
-			runtime => 'MultiThreadedDLL' });
+			runtime => 'MultiThreadedDLL'
+		}
+	);
 }
 
 sub AddDefine
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index b2f5fd6..47b1ec8 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -39,7 +39,8 @@ my $contrib_extralibs      = undef;
 my $contrib_extraincludes = { 'dblink' => ['src/backend'] };
 my $contrib_extrasource = {
 	'cube' => [ 'contrib/cube/cubescan.l', 'contrib/cube/cubeparse.y' ],
-	'seg'  => [ 'contrib/seg/segscan.l',   'contrib/seg/segparse.y' ], };
+	'seg'  => [ 'contrib/seg/segscan.l',   'contrib/seg/segparse.y' ],
+};
 my @contrib_excludes = (
 	'commit_ts',       'hstore_plperl',
 	'hstore_plpython', 'intagg',
@@ -47,7 +48,8 @@ my @contrib_excludes = (
 	'ltree_plpython',  'pgcrypto',
 	'sepgsql',         'brin',
 	'test_extensions', 'test_pg_dump',
-	'snapshot_too_old');
+	'snapshot_too_old'
+);
 
 # Set of variables for frontend modules
 my $frontend_defines = { 'initdb' => 'FRONTEND' };
@@ -55,26 +57,32 @@ my @frontend_uselibpq = ('pg_ctl', 'pg_upgrade', 'pgbench', 'psql', 'initdb');
 my @frontend_uselibpgport = (
 	'pg_archivecleanup', 'pg_test_fsync',
 	'pg_test_timing',    'pg_upgrade',
-	'pg_waldump',        'pgbench');
+	'pg_waldump',        'pgbench'
+);
 my @frontend_uselibpgcommon = (
 	'pg_archivecleanup', 'pg_test_fsync',
 	'pg_test_timing',    'pg_upgrade',
-	'pg_waldump',        'pgbench');
+	'pg_waldump',        'pgbench'
+);
 my $frontend_extralibs = {
 	'initdb'     => ['ws2_32.lib'],
 	'pg_restore' => ['ws2_32.lib'],
 	'pgbench'    => ['ws2_32.lib'],
-	'psql'       => ['ws2_32.lib'] };
+	'psql'       => ['ws2_32.lib']
+};
 my $frontend_extraincludes = {
 	'initdb' => ['src/timezone'],
-	'psql'   => ['src/backend'] };
+	'psql'   => ['src/backend']
+};
 my $frontend_extrasource = {
 	'psql' => ['src/bin/psql/psqlscanslash.l'],
 	'pgbench' =>
-	  [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ] };
+	  [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ]
+};
 my @frontend_excludes = (
 	'pgevent',    'pg_basebackup', 'pg_rewind', 'pg_dump',
-	'pg_waldump', 'scripts');
+	'pg_waldump', 'scripts'
+);
 
 sub mkvcbuild
 {
@@ -127,7 +135,8 @@ sub mkvcbuild
 
 	our @pgcommonfrontendfiles = (
 		@pgcommonallfiles, qw(fe_memutils.c file_utils.c
-		  restricted_token.c));
+		  restricted_token.c)
+	);
 
 	our @pgcommonbkndfiles = @pgcommonallfiles;
 
@@ -153,7 +162,8 @@ sub mkvcbuild
 	$postgres->AddFile('src/backend/utils/fmgrtab.c');
 	$postgres->ReplaceFile(
 		'src/backend/port/dynloader.c',
-		'src/backend/port/dynloader/win32.c');
+		'src/backend/port/dynloader/win32.c'
+	);
 	$postgres->ReplaceFile('src/backend/port/pg_sema.c',
 		'src/backend/port/win32_sema.c');
 	$postgres->ReplaceFile('src/backend/port/pg_shmem.c',
@@ -172,7 +182,8 @@ sub mkvcbuild
 	$postgres->AddFiles(
 		'src/backend/replication', 'repl_scanner.l',
 		'repl_gram.y',             'syncrep_scanner.l',
-		'syncrep_gram.y');
+		'syncrep_gram.y'
+	);
 	$postgres->AddDefine('BUILDING_DLL');
 	$postgres->AddLibrary('secur32.lib');
 	$postgres->AddLibrary('ws2_32.lib');
@@ -195,7 +206,8 @@ sub mkvcbuild
 		'src/backend/snowball/libstemmer',
 		sub {
 			return shift !~ /(dict_snowball.c|win32ver.rc)$/;
-		});
+		}
+	);
 	$snowball->AddIncludeDir('src/include/snowball');
 	$snowball->AddReference($postgres);
 
@@ -265,7 +277,8 @@ sub mkvcbuild
 
 	my $pgtypes = $solution->AddProject(
 		'libpgtypes', 'dll',
-		'interfaces', 'src/interfaces/ecpg/pgtypeslib');
+		'interfaces', 'src/interfaces/ecpg/pgtypeslib'
+	);
 	$pgtypes->AddDefine('FRONTEND');
 	$pgtypes->AddReference($libpgport);
 	$pgtypes->UseDef('src/interfaces/ecpg/pgtypeslib/pgtypeslib.def');
@@ -283,7 +296,8 @@ sub mkvcbuild
 
 	my $libecpgcompat = $solution->AddProject(
 		'libecpg_compat', 'dll',
-		'interfaces',     'src/interfaces/ecpg/compatlib');
+		'interfaces',     'src/interfaces/ecpg/compatlib'
+	);
 	$libecpgcompat->AddDefine('FRONTEND');
 	$libecpgcompat->AddIncludeDir('src/interfaces/ecpg/include');
 	$libecpgcompat->AddIncludeDir('src/interfaces/libpq');
@@ -439,7 +453,8 @@ sub mkvcbuild
 		'pgp-info.c',       'pgp-mpi.c',
 		'pgp-pubdec.c',     'pgp-pubenc.c',
 		'pgp-pubkey.c',     'pgp-s2k.c',
-		'pgp-pgsql.c');
+		'pgp-pgsql.c'
+	);
 	if ($solution->{options}->{openssl})
 	{
 		$pgcrypto->AddFiles('contrib/pgcrypto', 'openssl.c',
@@ -452,7 +467,8 @@ sub mkvcbuild
 			'sha1.c',           'internal.c',
 			'internal-sha2.c',  'blf.c',
 			'rijndael.c',       'pgp-mpi-internal.c',
-			'imath.c');
+			'imath.c'
+		);
 	}
 	$pgcrypto->AddReference($postgres);
 	$pgcrypto->AddLibrary('ws2_32.lib');
@@ -504,18 +520,21 @@ sub mkvcbuild
 		my $hstore_plpython = AddTransformModule(
 			'hstore_plpython' . $pymajorver, 'contrib/hstore_plpython',
 			'plpython' . $pymajorver,        'src/pl/plpython',
-			'hstore',                        'contrib/hstore');
+			'hstore',                        'contrib/hstore'
+		);
 		$hstore_plpython->AddDefine(
 			'PLPYTHON_LIBNAME="plpython' . $pymajorver . '"');
 		my $jsonb_plpython = AddTransformModule(
 			'jsonb_plpython' . $pymajorver, 'contrib/jsonb_plpython',
-			'plpython' . $pymajorver,       'src/pl/plpython');
+			'plpython' . $pymajorver,       'src/pl/plpython'
+		);
 		$jsonb_plpython->AddDefine(
 			'PLPYTHON_LIBNAME="plpython' . $pymajorver . '"');
 		my $ltree_plpython = AddTransformModule(
 			'ltree_plpython' . $pymajorver, 'contrib/ltree_plpython',
 			'plpython' . $pymajorver,       'src/pl/plpython',
-			'ltree',                        'contrib/ltree');
+			'ltree',                        'contrib/ltree'
+		);
 		$ltree_plpython->AddDefine(
 			'PLPYTHON_LIBNAME="plpython' . $pymajorver . '"');
 	}
@@ -626,7 +645,8 @@ sub mkvcbuild
 					(map { "-D$_" } @perl_embed_ccflags, $define || ()),
 					$source_file,
 					'/link',
-					$perl_libs[0]);
+					$perl_libs[0]
+				);
 				my $compile_output = `@cmd 2>&1`;
 				-f $exe || die "Failed to build Perl test:\n$compile_output";
 
@@ -703,12 +723,16 @@ sub mkvcbuild
 				}
 			}
 		}
-		if (Solution::IsNewer(
+		if (
+			Solution::IsNewer(
 				'src/pl/plperl/perlchunks.h',
-				'src/pl/plperl/plc_perlboot.pl')
+				'src/pl/plperl/plc_perlboot.pl'
+			)
 			|| Solution::IsNewer(
 				'src/pl/plperl/perlchunks.h',
-				'src/pl/plperl/plc_trusted.pl'))
+				'src/pl/plperl/plc_trusted.pl'
+			)
+		  )
 		{
 			print 'Building src/pl/plperl/perlchunks.h ...' . "\n";
 			my $basedir = getcwd;
@@ -727,9 +751,12 @@ sub mkvcbuild
 				die 'Failed to create perlchunks.h' . "\n";
 			}
 		}
-		if (Solution::IsNewer(
+		if (
+			Solution::IsNewer(
 				'src/pl/plperl/plperl_opmask.h',
-				'src/pl/plperl/plperl_opmask.pl'))
+				'src/pl/plperl/plperl_opmask.pl'
+			)
+		  )
 		{
 			print 'Building src/pl/plperl/plperl_opmask.h ...' . "\n";
 			my $basedir = getcwd;
@@ -751,10 +778,12 @@ sub mkvcbuild
 		my $hstore_plperl = AddTransformModule(
 			'hstore_plperl', 'contrib/hstore_plperl',
 			'plperl',        'src/pl/plperl',
-			'hstore',        'contrib/hstore');
+			'hstore',        'contrib/hstore'
+		);
 		my $jsonb_plperl = AddTransformModule(
 			'jsonb_plperl', 'contrib/jsonb_plperl',
-			'plperl',       'src/pl/plperl');
+			'plperl',       'src/pl/plperl'
+		);
 
 		foreach my $f (@perl_embed_ccflags)
 		{
@@ -1015,7 +1044,8 @@ sub AdjustContribProj
 		$proj,                    $contrib_defines,
 		\@contrib_uselibpq,       \@contrib_uselibpgport,
 		\@contrib_uselibpgcommon, $contrib_extralibs,
-		$contrib_extrasource,     $contrib_extraincludes);
+		$contrib_extrasource,     $contrib_extraincludes
+	);
 }
 
 sub AdjustFrontendProj
@@ -1025,7 +1055,8 @@ sub AdjustFrontendProj
 		$proj,                     $frontend_defines,
 		\@frontend_uselibpq,       \@frontend_uselibpgport,
 		\@frontend_uselibpgcommon, $frontend_extralibs,
-		$frontend_extrasource,     $frontend_extraincludes);
+		$frontend_extrasource,     $frontend_extraincludes
+	);
 }
 
 sub AdjustModule
diff --git a/src/tools/msvc/Project.pm b/src/tools/msvc/Project.pm
index 3e08ce9..6b7d710 100644
--- a/src/tools/msvc/Project.pm
+++ b/src/tools/msvc/Project.pm
@@ -16,7 +16,8 @@ sub _new
 	my $good_types = {
 		lib => 1,
 		exe => 1,
-		dll => 1, };
+		dll => 1,
+	};
 	confess("Bad project type: $type\n") unless exists $good_types->{$type};
 	my $self = {
 		name                  => $name,
@@ -32,7 +33,8 @@ sub _new
 		solution              => $solution,
 		disablewarnings       => '4018;4244;4273;4102;4090;4267',
 		disablelinkerwarnings => '',
-		platform              => $solution->{platform}, };
+		platform              => $solution->{platform},
+	};
 
 	bless($self, $classname);
 	return $self;
@@ -217,7 +219,8 @@ sub AddDir
 
 				if ($filter eq "LIBOBJS")
 				{
-					if (grep(/$p/, @main::pgportfiles, @main::pgcommonfiles)
+					if (
+						grep(/$p/, @main::pgportfiles, @main::pgcommonfiles)
 						== 1)
 					{
 						$p =~ s/\.c/\.o/;
diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index 55566bf..61fd7dc 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -22,7 +22,8 @@ sub _new
 		VisualStudioVersion        => undef,
 		MinimumVisualStudioVersion => undef,
 		vcver                      => undef,
-		platform                   => undef, };
+		platform                   => undef,
+	};
 	bless($self, $classname);
 
 	$self->DeterminePlatform();
@@ -236,32 +237,40 @@ sub GenerateFiles
 		close($i);
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			"src/include/pg_config_ext.h",
-			"src/include/pg_config_ext.h.win32"))
+			"src/include/pg_config_ext.h.win32"
+		)
+	  )
 	{
 		print "Copying pg_config_ext.h...\n";
 		copyFile(
 			"src/include/pg_config_ext.h.win32",
-			"src/include/pg_config_ext.h");
+			"src/include/pg_config_ext.h"
+		);
 	}
 
 	$self->GenerateDefFile(
 		"src/interfaces/libpq/libpqdll.def",
 		"src/interfaces/libpq/exports.txt",
-		"LIBPQ");
+		"LIBPQ"
+	);
 	$self->GenerateDefFile(
 		"src/interfaces/ecpg/ecpglib/ecpglib.def",
 		"src/interfaces/ecpg/ecpglib/exports.txt",
-		"LIBECPG");
+		"LIBECPG"
+	);
 	$self->GenerateDefFile(
 		"src/interfaces/ecpg/compatlib/compatlib.def",
 		"src/interfaces/ecpg/compatlib/exports.txt",
-		"LIBECPG_COMPAT");
+		"LIBECPG_COMPAT"
+	);
 	$self->GenerateDefFile(
 		"src/interfaces/ecpg/pgtypeslib/pgtypeslib.def",
 		"src/interfaces/ecpg/pgtypeslib/exports.txt",
-		"LIBPGTYPES");
+		"LIBPGTYPES"
+	);
 
 	chdir('src/backend/utils');
 	my $pg_language_dat = '../../../src/include/catalog/pg_language.dat';
@@ -281,43 +290,60 @@ sub GenerateFiles
 	}
 	chdir('../../..');
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/include/utils/fmgroids.h',
-			'src/backend/utils/fmgroids.h'))
+			'src/backend/utils/fmgroids.h'
+		)
+	  )
 	{
 		copyFile('src/backend/utils/fmgroids.h',
 			'src/include/utils/fmgroids.h');
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/include/utils/fmgrprotos.h',
-			'src/backend/utils/fmgrprotos.h'))
+			'src/backend/utils/fmgrprotos.h'
+		)
+	  )
 	{
 		copyFile(
 			'src/backend/utils/fmgrprotos.h',
-			'src/include/utils/fmgrprotos.h');
+			'src/include/utils/fmgrprotos.h'
+		);
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/include/storage/lwlocknames.h',
-			'src/backend/storage/lmgr/lwlocknames.txt'))
+			'src/backend/storage/lmgr/lwlocknames.txt'
+		)
+	  )
 	{
 		print "Generating lwlocknames.c and lwlocknames.h...\n";
 		chdir('src/backend/storage/lmgr');
 		system('perl generate-lwlocknames.pl lwlocknames.txt');
 		chdir('../../../..');
 	}
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/include/storage/lwlocknames.h',
-			'src/backend/storage/lmgr/lwlocknames.h'))
+			'src/backend/storage/lmgr/lwlocknames.h'
+		)
+	  )
 	{
 		copyFile(
 			'src/backend/storage/lmgr/lwlocknames.h',
-			'src/include/storage/lwlocknames.h');
+			'src/include/storage/lwlocknames.h'
+		);
 	}
 
-	if (IsNewer(
-			'src/include/dynloader.h', 'src/backend/port/dynloader/win32.h'))
+	if (
+		IsNewer(
+			'src/include/dynloader.h', 'src/backend/port/dynloader/win32.h'
+		)
+	  )
 	{
 		copyFile('src/backend/port/dynloader/win32.h',
 			'src/include/dynloader.h');
@@ -331,10 +357,13 @@ sub GenerateFiles
 		);
 	}
 
-	if ($self->{options}->{python}
+	if (
+		$self->{options}->{python}
 		&& IsNewer(
 			'src/pl/plpython/spiexceptions.h',
-			'src/backend/utils/errcodes.txt'))
+			'src/backend/utils/errcodes.txt'
+		)
+	  )
 	{
 		print "Generating spiexceptions.h...\n";
 		system(
@@ -342,9 +371,12 @@ sub GenerateFiles
 		);
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/include/utils/errcodes.h',
-			'src/backend/utils/errcodes.txt'))
+			'src/backend/utils/errcodes.txt'
+		)
+	  )
 	{
 		print "Generating errcodes.h...\n";
 		system(
@@ -354,9 +386,12 @@ sub GenerateFiles
 			'src/include/utils/errcodes.h');
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/pl/plpgsql/src/plerrcodes.h',
-			'src/backend/utils/errcodes.txt'))
+			'src/backend/utils/errcodes.txt'
+		)
+	  )
 	{
 		print "Generating plerrcodes.h...\n";
 		system(
@@ -364,9 +399,12 @@ sub GenerateFiles
 		);
 	}
 
-	if ($self->{options}->{tcl}
+	if (
+		$self->{options}->{tcl}
 		&& IsNewer(
-			'src/pl/tcl/pltclerrcodes.h', 'src/backend/utils/errcodes.txt'))
+			'src/pl/tcl/pltclerrcodes.h', 'src/backend/utils/errcodes.txt'
+		)
+	  )
 	{
 		print "Generating pltclerrcodes.h...\n";
 		system(
@@ -374,9 +412,12 @@ sub GenerateFiles
 		);
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/backend/utils/sort/qsort_tuple.c',
-			'src/backend/utils/sort/gen_qsort_tuple.pl'))
+			'src/backend/utils/sort/gen_qsort_tuple.pl'
+		)
+	  )
 	{
 		print "Generating qsort_tuple.c...\n";
 		system(
@@ -384,9 +425,12 @@ sub GenerateFiles
 		);
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/interfaces/libpq/libpq.rc',
-			'src/interfaces/libpq/libpq.rc.in'))
+			'src/interfaces/libpq/libpq.rc.in'
+		)
+	  )
 	{
 		print "Generating libpq.rc...\n";
 		my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) =
@@ -413,9 +457,12 @@ sub GenerateFiles
 		chdir('../../..');
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/interfaces/ecpg/preproc/preproc.y',
-			'src/backend/parser/gram.y'))
+			'src/backend/parser/gram.y'
+		)
+	  )
 	{
 		print "Generating preproc.y...\n";
 		chdir('src/interfaces/ecpg/preproc');
@@ -423,9 +470,12 @@ sub GenerateFiles
 		chdir('../../../..');
 	}
 
-	if (IsNewer(
+	if (
+		IsNewer(
 			'src/interfaces/ecpg/include/ecpg_config.h',
-			'src/interfaces/ecpg/include/ecpg_config.h.in'))
+			'src/interfaces/ecpg/include/ecpg_config.h.in'
+		)
+	  )
 	{
 		print "Generating ecpg_config.h...\n";
 		open(my $o, '>', 'src/interfaces/ecpg/include/ecpg_config.h')
@@ -477,9 +527,12 @@ EOF
 	foreach my $bki (@bki_srcs, @bki_data)
 	{
 		next if $bki eq "";
-		if (IsNewer(
+		if (
+			IsNewer(
 				'src/backend/catalog/bki-stamp',
-				"src/include/catalog/$bki"))
+				"src/include/catalog/$bki"
+			)
+		  )
 		{
 			$need_genbki = 1;
 			last;
@@ -510,11 +563,13 @@ EOF
 		{
 			copyFile(
 				"src/backend/catalog/$def_header",
-				"src/include/catalog/$def_header");
+				"src/include/catalog/$def_header"
+			);
 		}
 		copyFile(
 			'src/backend/catalog/schemapg.h',
-			'src/include/catalog/schemapg.h');
+			'src/include/catalog/schemapg.h'
+		);
 	}
 
 	open(my $o, '>', "doc/src/sgml/version.sgml")
diff --git a/src/tools/msvc/VCBuildProject.pm b/src/tools/msvc/VCBuildProject.pm
index d3a03c5..d0b9db5 100644
--- a/src/tools/msvc/VCBuildProject.pm
+++ b/src/tools/msvc/VCBuildProject.pm
@@ -35,19 +35,25 @@ EOF
 
 	$self->WriteConfiguration(
 		$f, 'Debug',
-		{   defs     => "_DEBUG;DEBUG=1",
+		{
+			defs     => "_DEBUG;DEBUG=1",
 			wholeopt => 0,
 			opt      => 0,
 			strpool  => 'false',
-			runtime  => 3 });
+			runtime  => 3
+		}
+	);
 	$self->WriteConfiguration(
 		$f,
 		'Release',
-		{   defs     => "",
+		{
+			defs     => "",
 			wholeopt => 0,
 			opt      => 3,
 			strpool  => 'true',
-			runtime  => 2 });
+			runtime  => 2
+		}
+	);
 	print $f <<EOF;
  </Configurations>
 EOF
@@ -73,7 +79,8 @@ EOF
 		# we're done with.
 		while ($#dirstack >= 0)
 		{
-			if (join('/', @dirstack) eq
+			if (
+				join('/', @dirstack) eq
 				substr($dir, 0, length(join('/', @dirstack))))
 			{
 				last if (length($dir) == length(join('/', @dirstack)));
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 3a88638..d44edf7 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -84,7 +84,8 @@ my %command = (
 	BINCHECK       => \&bincheck,
 	RECOVERYCHECK  => \&recoverycheck,
 	UPGRADECHECK   => \&upgradecheck,
-	TAPTEST        => \&taptest,);
+	TAPTEST        => \&taptest,
+);
 
 my $proc = $command{$what};
 
@@ -106,7 +107,8 @@ sub installcheck
 		"--schedule=${schedule}_schedule",
 		"--max-concurrent-tests=20",
 		"--encoding=SQL_ASCII",
-		"--no-locale");
+		"--no-locale"
+	);
 	push(@args, $maxconn) if $maxconn;
 	system(@args);
 	my $status = $? >> 8;
@@ -126,7 +128,8 @@ sub check
 		"--max-concurrent-tests=20",
 		"--encoding=SQL_ASCII",
 		"--no-locale",
-		"--temp-instance=./tmp_check");
+		"--temp-instance=./tmp_check"
+	);
 	push(@args, $maxconn)     if $maxconn;
 	push(@args, $temp_config) if $temp_config;
 	system(@args);
@@ -152,7 +155,8 @@ sub ecpgcheck
 		"--schedule=${schedule}_schedule",
 		"--encoding=SQL_ASCII",
 		"--no-locale",
-		"--temp-instance=./tmp_chk");
+		"--temp-instance=./tmp_chk"
+	);
 	push(@args, $maxconn) if $maxconn;
 	system(@args);
 	$status = $? >> 8;
@@ -168,7 +172,8 @@ sub isolationcheck
 		"../../../$Config/pg_isolation_regress/pg_isolation_regress",
 		"--bindir=../../../$Config/psql",
 		"--inputdir=.",
-		"--schedule=./isolation_schedule");
+		"--schedule=./isolation_schedule"
+	);
 	push(@args, $maxconn) if $maxconn;
 	system(@args);
 	my $status = $? >> 8;
@@ -257,14 +262,16 @@ sub mangle_plpython3
 	foreach my $test (@$tests)
 	{
 		local $/ = undef;
-		foreach my $dir ('sql','expected')
+		foreach my $dir ('sql', 'expected')
 		{
 			my $extension = ($dir eq 'sql' ? 'sql' : 'out');
 
-			my @files = glob("$dir/$test.$extension $dir/${test}_[0-9].$extension");
+			my @files =
+			  glob("$dir/$test.$extension $dir/${test}_[0-9].$extension");
 			foreach my $file (@files)
 			{
-				open(my $handle, '<', $file) || die "test file $file not found";
+				open(my $handle, '<', $file)
+				  || die "test file $file not found";
 				my $contents = <$handle>;
 				close($handle);
 				do
@@ -279,16 +286,18 @@ sub mangle_plpython3
 					s/LANGUAGE plpython2?u/LANGUAGE plpython3u/g;
 					s/EXTENSION ([^ ]*_)*plpython2?u/EXTENSION $1plpython3u/g;
 					s/installing required extension "plpython2u"/installing required extension "plpython3u"/g;
-				} for ($contents);
+				  }
+				  for ($contents);
 				my $base = basename $file;
-				open($handle, '>', "$dir/python3/$base") ||
-				  die "opening python 3 file for $file";
+				open($handle, '>', "$dir/python3/$base")
+				  || die "opening python 3 file for $file";
 				print $handle $contents;
 				close($handle);
 			}
 		}
 	}
-	do { s!^!python3/!; } foreach(@$tests);
+	do { s!^!python3/!; }
+	  foreach (@$tests);
 	return @$tests;
 }
 
@@ -314,8 +323,9 @@ sub plcheck
 		}
 		if ($lang eq 'plpython')
 		{
-			next unless -d "$topdir/$Config/plpython2" ||
-				-d "$topdir/$Config/plpython3";
+			next
+			  unless -d "$topdir/$Config/plpython2"
+			  || -d "$topdir/$Config/plpython3";
 			$lang = 'plpythonu';
 		}
 		else
@@ -326,7 +336,7 @@ sub plcheck
 		chdir $dir;
 		my @tests = fetchTests();
 		@tests = mangle_plpython3(\@tests)
-			if $lang eq 'plpythonu' && -d "$topdir/$Config/plpython3";
+		  if $lang eq 'plpythonu' && -d "$topdir/$Config/plpython3";
 		if ($lang eq 'plperl')
 		{
 
@@ -352,7 +362,8 @@ sub plcheck
 		my @args = (
 			"$topdir/$Config/pg_regress/pg_regress",
 			"--bindir=$topdir/$Config/psql",
-			"--dbname=pl_regression", @lang_args, @tests);
+			"--dbname=pl_regression", @lang_args, @tests
+		);
 		system(@args);
 		my $status = $? >> 8;
 		exit $status if $status;
@@ -380,7 +391,7 @@ sub subdircheck
 	# Special processing for python transform modules, see their respective
 	# Makefiles for more details regarding Python-version specific
 	# dependencies.
-	if ( $module =~ /_plpython$/ )
+	if ($module =~ /_plpython$/)
 	{
 		die "Python not enabled in configuration"
 		  if !defined($config->{python});
@@ -404,8 +415,9 @@ sub subdircheck
 	my @args = (
 		"$topdir/$Config/pg_regress/pg_regress",
 		"--bindir=${topdir}/${Config}/psql",
-		"--dbname=contrib_regression", @opts, @tests);
-	print join(' ',@args),"\n";
+		"--dbname=contrib_regression", @opts, @tests
+	);
+	print join(' ', @args), "\n";
 	system(@args);
 	chdir "..";
 }
@@ -417,11 +429,11 @@ sub contribcheck
 	foreach my $module (glob("*"))
 	{
 		# these configuration-based exclusions must match Install.pm
-		next if ($module eq "uuid-ossp"     && !defined($config->{uuid}));
-		next if ($module eq "sslinfo"       && !defined($config->{openssl}));
-		next if ($module eq "xml2"          && !defined($config->{xml}));
-		next if ($module =~ /_plperl$/      && !defined($config->{perl}));
-		next if ($module =~ /_plpython$/    && !defined($config->{python}));
+		next if ($module eq "uuid-ossp"  && !defined($config->{uuid}));
+		next if ($module eq "sslinfo"    && !defined($config->{openssl}));
+		next if ($module eq "xml2"       && !defined($config->{xml}));
+		next if ($module =~ /_plperl$/   && !defined($config->{perl}));
+		next if ($module =~ /_plpython$/ && !defined($config->{python}));
 		next if ($module eq "sepgsql");
 
 		subdircheck($module);
@@ -460,7 +472,9 @@ sub standard_initdb
 	return (
 		system('initdb', '-N') == 0 and system(
 			"$topdir/$Config/pg_regress/pg_regress", '--config-auth',
-			$ENV{PGDATA}) == 0);
+			$ENV{PGDATA}
+		) == 0
+	);
 }
 
 # This is similar to appendShellString().  Perl system(@args) bypasses
@@ -553,7 +567,8 @@ sub upgradecheck
 	print "\nRunning pg_upgrade\n\n";
 	@args = (
 		'pg_upgrade', '-d', "$data.old", '-D', $data, '-b',
-		$bindir,      '-B', $bindir);
+		$bindir,      '-B', $bindir
+	);
 	system(@args) == 0 or exit 1;
 	print "\nStarting new cluster\n\n";
 	@args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
diff --git a/src/tools/pgindent/perltidyrc b/src/tools/pgindent/perltidyrc
index 29baef7..91b76fc 100644
--- a/src/tools/pgindent/perltidyrc
+++ b/src/tools/pgindent/perltidyrc
@@ -11,5 +11,5 @@
 --opening-brace-on-new-line
 --output-line-ending=unix
 --paren-tightness=2
---vertical-tightness=2
---vertical-tightness-closing=2
+# --vertical-tightness=2
+# --vertical-tightness-closing=2
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index ce0f43f..8d4684d 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -28,7 +28,8 @@ my %options = (
 	"code-base=s"        => \$code_base,
 	"excludes=s"         => \$excludes,
 	"indent=s"           => \$indent,
-	"build"              => \$build,);
+	"build"              => \$build,
+);
 GetOptions(%options) || die "bad command line argument\n";
 
 run_build($code_base) if ($build);
@@ -389,7 +390,8 @@ sub build_clean
 
 # get the list of files under code base, if it's set
 File::Find::find(
-	{   wanted => sub {
+	{
+		wanted => sub {
 			my ($dev, $ino, $mode, $nlink, $uid, $gid);
 			(($dev, $ino, $mode, $nlink, $uid, $gid) = lstat($_))
 			  && -f _
@@ -397,7 +399,8 @@ File::Find::find(
 			  && push(@files, $File::Find::name);
 		}
 	},
-	$code_base) if $code_base;
+	$code_base
+) if $code_base;
 
 process_exclude();
 
diff --git a/src/tools/version_stamp.pl b/src/tools/version_stamp.pl
index 392fd4a..67effca 100755
--- a/src/tools/version_stamp.pl
+++ b/src/tools/version_stamp.pl
@@ -83,7 +83,8 @@ my $aconfver = "";
 open(my $fh, '<', "configure.in") || die "could not read configure.in: $!\n";
 while (<$fh>)
 {
-	if (m/^m4_if\(m4_defn\(\[m4_PACKAGE_VERSION\]\), \[(.*)\], \[\], \[m4_fatal/
+	if (
+		m/^m4_if\(m4_defn\(\[m4_PACKAGE_VERSION\]\), \[(.*)\], \[\], \[m4_fatal/
 	  )
 	{
 		$aconfver = $1;
diff --git a/src/tools/win32tzlist.pl b/src/tools/win32tzlist.pl
index 4610d43..0fb561b 100755
--- a/src/tools/win32tzlist.pl
+++ b/src/tools/win32tzlist.pl
@@ -47,9 +47,11 @@ foreach my $keyname (@subkeys)
 	die "Incomplete timezone data for $keyname!\n"
 	  unless ($vals{Std} && $vals{Dlt} && $vals{Display});
 	push @system_zones,
-	  { 'std'     => $vals{Std}->[2],
+	  {
+		'std'     => $vals{Std}->[2],
 		'dlt'     => $vals{Dlt}->[2],
-		'display' => clean_displayname($vals{Display}->[2]), };
+		'display' => clean_displayname($vals{Display}->[2]),
+	  };
 }
 
 $basekey->Close();
@@ -75,10 +77,12 @@ while ($pgtz =~
 	m/{\s+"([^"]+)",\s+"([^"]+)",\s+"([^"]+)",?\s+},\s+\/\*(.+?)\*\//gs)
 {
 	push @file_zones,
-	  { 'std'     => $1,
+	  {
+		'std'     => $1,
 		'dlt'     => $2,
 		'match'   => $3,
-		'display' => clean_displayname($4), };
+		'display' => clean_displayname($4),
+	  };
 }
 
 #
#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#3)
Re: perlcritic and perltidy

Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:

On 05/06/2018 11:53 AM, Tom Lane wrote:

What sort of changes do we get if we remove those two flags as you prefer?
It'd help to see some examples.

Essentially it adds some vertical whitespace to structures so that the
enclosing braces etc appear on their own lines. A very typical change
looks like this:

-         { code      => $code,
+         {
+           code      => $code,
            ucs       => $ucs,
            comment   => $rest,
            direction => $direction,
            f         => $in_file,
-           l         => $. };
+           l         => $.
+         };

Hm. I have no strong opinion about whether this looks better or not;
people who write more Perl than I do ought to weigh in.

However, I do want to note that we've chosen the shorter style for
the catalog .dat files, and that's enforced by reformat_dat_file.pl.
I'd be against changing that decision, because one of the goals for
the .dat file format was to minimize the risk of patches applying in
the wrong place. Near-content-free lines containing just "{" or "},"
would increase that risk by reducing the uniqueness of patch context
lines.

It's not essential for the .dat files to match the Perl layout we use
elsewhere, of course. But perhaps consistency is one factor to
consider here.
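
To put that concretely with a couple of made-up entries (not real catalog fields or values), compare:

    # compact .dat style: every line of patch context is distinctive
    { oid => 101, descr => 'alpha thing', name => 'alpha' },
    { oid => 102, descr => 'beta thing',  name => 'beta' },

    # expanded style: the bare "{" and "}," lines are identical
    # everywhere, so a hunk has less unique context to anchor on
    {
        oid   => 101,
        descr => 'alpha thing',
        name  => 'alpha',
    },
    {
        oid   => 102,
        descr => 'beta thing',
        name  => 'beta',
    },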

regards, tom lane

#5Stephen Frost
sfrost@snowman.net
In reply to: Tom Lane (#4)
Re: perlcritic and perltidy

Greetings,

* Tom Lane (tgl@sss.pgh.pa.us) wrote:

Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:

On 05/06/2018 11:53 AM, Tom Lane wrote:

What sort of changes do we get if we remove those two flags as you prefer?
It'd help to see some examples.

Essentially it adds some vertical whitespace to structures so that the
enclosing braces etc appear on their own lines. A very typical change
looks like this:

-         { code      => $code,
+         {
+           code      => $code,
            ucs       => $ucs,
            comment   => $rest,
            direction => $direction,
            f         => $in_file,
-           l         => $. };
+           l         => $.
+         };

Hm. I have no strong opinion about whether this looks better or not;
people who write more Perl than I do ought to weigh in.

I definitely prefer to have the braces on their own line- makes working
with the files a lot easier when you've got a lot of hashes
(particularly thinking about the hashes for the pg_dump regression
tests..). Having them on independent lines would have saved me quite a
few keystrokes when I reworked those tests.

However, I do want to note that we've chosen the shorter style for
the catalog .dat files, and that's enforced by reformat_dat_file.pl.
I'd be against changing that decision, because one of the goals for
the .dat file format was to minimize the risk of patches applying in
the wrong place. Near-content-free lines containing just "{" or "},"
would increase that risk by reducing the uniqueness of patch context
lines.

I can understand that concern, though I don't think it really applies as
much to the other Perl code.

Thanks!

Stephen

#6Michael Paquier
michael@paquier.xyz
In reply to: Stephen Frost (#5)
Re: perlcritic and perltidy

On Sun, May 06, 2018 at 01:46:28PM -0400, Stephen Frost wrote:

I definitely prefer to have the braces on their own line- makes working
with the files a lot easier when you've got a lot of hashes
(particularly thinking about the hashes for the pg_dump regression
tests..). Having them on independent lines would have saved me quite a
few keystrokes when I reworked those tests.

Agreed with Stephen's argument. Let's keep the braces on the same
line. I have also been annoyed a couple of times with the format which
adds a new line just for a brace.
--
Michael

#7Stephen Frost
sfrost@snowman.net
In reply to: Michael Paquier (#6)
Re: perlcritic and perltidy

Michael,

* Michael Paquier (michael@paquier.xyz) wrote:

On Sun, May 06, 2018 at 01:46:28PM -0400, Stephen Frost wrote:

I definitely prefer to have the braces on their own line- makes working
with the files a lot easier when you've got a lot of hashes
(particularly thinking about the hashes for the pg_dump regression
tests..). Having them on independent lines would have saved me quite a
few keystrokes when I reworked those tests.

Agreed with Stephen's argument. Let's keep the braces on the same
line. I have also been annoyed a couple of times with the format which
adds a new line just for a brace.

While I appreciate the support, I'm not sure that you're actually
agreeing with me.. I was arguing that braces should be on their own
line and therefore there would be a new line for the brace.
Specifically, when moving lines between hashes, it's annoying to have to
also worry about if the line being copied/moved has braces at the end or
not- much easier if they don't and the braces are on their own line.
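
A contrived before/after (keys loosely modeled on the pg_dump test hashes,
values invented) of what that looks like in practice:

    # tight style: the last entry's line also carries the closing
    # brackets, so moving it or appending after it means re-editing it
    my %tight = (
        dump_cmd => [ 'pg_dump', 'postgres' ],
        like     => { binary_upgrade => 1 } );

    # brackets on their own lines: every entry is a self-contained
    # line that can be cut and pasted as-is
    my %roomy = (
        dump_cmd => [ 'pg_dump', 'postgres' ],
        like     => { binary_upgrade => 1 },
    );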

Thanks!

Stephen

#8Michael Paquier
michael@paquier.xyz
In reply to: Stephen Frost (#7)
Re: perlcritic and perltidy

On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:

While I appreciate the support, I'm not sure that you're actually
agreeing with me.. I was arguing that braces should be on their own
line and therefore there would be a new line for the brace.
Specifically, when moving lines between hashes, it's annoying to have to
also worry about if the line being copied/moved has braces at the end or
not- much easier if they don't and the braces are on their own line.

I should have read that twice. Yes we are not on the same line. Even
if a brace is on a different line, per your argument it would still be
nicer to add a comma at the end of each last element of a hash or an
array, which is what you have done in the tests of pg_dump, but not
something that the proposed patch does consistently. If the formatting
is automated, the way chosen does not matter much, but the extra last
comma should be consistently present as well?
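
For what it's worth, a tiny made-up example of the difference the last
comma makes:

    # with a trailing comma on the last element, later adding 'gamma'
    # shows up in the diff as a pure one-line addition
    my @with_comma = (
        'alpha',
        'beta',
    );

    # without it, the same addition also has to modify the 'beta' line
    my @without_comma = (
        'alpha',
        'beta'
    );
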
--
Michael

#9Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Andrew Dunstan (#3)
Re: perlcritic and perltidy

On 5/6/18 12:13, Andrew Dunstan wrote:

Essentially it adds some vertical whitespace to structures so that the
enclosing braces etc appear on their own lines. A very typical change
looks like this:

-         { code      => $code,
+         {
+           code      => $code,
            ucs       => $ucs,
            comment   => $rest,
            direction => $direction,
            f         => $in_file,
-           l         => $. };
+           l         => $.
+         };

The proposed changes certainly match the style we use in C better, which
is what some of the other settings were also informed by. So I'm in
favor of the changes -- for braces.

For parentheses, I'm not sure whether this is a good idea:

diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 2971e64..0d3184c 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -40,8 +40,11 @@ while (<$in>)
 	next if (($code & 0xFF) < 0xA1);
 	next
 	  if (
-		!(     $code >= 0xA100 && $code <= 0xA9FF
-			|| $code >= 0xB000 && $code <= 0xF7FF));
+		!(
+			   $code >= 0xA100 && $code <= 0xA9FF
+			|| $code >= 0xB000 && $code <= 0xF7FF
+		)
+	  );

next if ($code >= 0xA2A1 && $code <= 0xA2B0);
next if ($code >= 0xA2E3 && $code <= 0xA2E4);

In a manual C-style indentation, this would just be

    next if (!($code >= 0xA100 && $code <= 0xA9FF
               || $code >= 0xB000 && $code <= 0xF7FF));

but somehow the indent runs have managed to spread this compact
expression over the entire screen.

Can we have separate settings for braces and parentheses?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#10Stephen Frost
sfrost@snowman.net
In reply to: Michael Paquier (#8)
Re: perlcritic and perltidy

Greetings,

* Michael Paquier (michael@paquier.xyz) wrote:

On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:

While I appreciate the support, I'm not sure that you're actually
agreeing with me.. I was arguing that braces should be on their own
line and therefore there would be a new line for the brace.
Specifically, when moving lines between hashes, it's annoying to have to
also worry about if the line being copied/moved has braces at the end or
not- much easier if they don't and the braces are on their own line.

I should have read that twice. Yes we are not on the same line. Even
if a brace is on a different line, per your argument it would still be
nicer to add a comma at the end of each last element of a hash or an
array, which is what you have done in the tests of pg_dump, but not
something that the proposed patch does consistently. If the formatting
is automated, the way chosen does not matter much, but the extra last
comma should be consistently present as well?

Yes, that would be nice as well, as you'd be able to move entries around
more easily that way.

Thanks!

Stephen

#11David Steele
david@pgmasters.net
In reply to: Stephen Frost (#10)
Re: perlcritic and perltidy

On 5/8/18 8:11 AM, Stephen Frost wrote:

Greetings,

* Michael Paquier (michael@paquier.xyz) wrote:

On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:

While I appreciate the support, I'm not sure that you're actually
agreeing with me.. I was arguing that braces should be on their own
line and therefore there would be a new line for the brace.
Specifically, when moving lines between hashes, it's annoying to have to
also worry about if the line being copied/moved has braces at the end or
not- much easier if they don't and the braces are on their own line.

I should have read that twice. Yes we are not on the same line. Even
if a brace is on a different line, per your argument it would still be
nicer to add a comma at the end of each last element of a hash or an
array, which is what you have done in the tests of pg_dump, but not
something that the proposed patch does consistently. If the formatting
is automated, the way chosen does not matter much, but the extra last
comma should be consistently present as well?

Yes, that would be nice as well, as you'd be able to move entries around
more easily that way.

I'm a fan of the final comma as it makes diffs less noisy.

Regards,
--
-David
david@pgmasters.net

#12Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: David Steele (#11)
Re: perlcritic and perltidy

On 05/08/2018 08:31 AM, David Steele wrote:

On 5/8/18 8:11 AM, Stephen Frost wrote:

Greetings,

* Michael Paquier (michael@paquier.xyz) wrote:

On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:

While I appreciate the support, I'm not sure that you're actually
agreeing with me.. I was arguing that braces should be on their own
line and therefore there would be a new line for the brace.
Specifically, when moving lines between hashes, it's annoying to have to
also worry about if the line being copied/moved has braces at the end or
not- much easier if they don't and the braces are on their own line.

I should have read that twice. Yes we are not on the same line. Even
if a brace is on a different line, per your argument it would still be
nicer to add a comma at the end of each last element of a hash or an
array, which is what you have done in the tests of pg_dump, but not
something that the proposed patch does consistently. If the formatting
is automated, the way chosen does not matter much, but the extra last
comma should be consistently present as well?

Yes, that would be nice as well, as you'd be able to move entries around
more easily that way.

I'm a fan of the final comma as it makes diffs less noisy.

Me too.

AFAICT there is no perltidy setting to add them where they are missing.
There is a perlcritic setting to detect them in lists, however. Here is
the output:

andrew@emma:pg_head (master)$ {
    find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print
    find . -type f -perm -100 -exec file {} \; -print \
        | egrep -i ':.*perl[0-9]*\>' \
        | cut -d: -f1
} | sort -u | xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas
./src/backend/catalog/Catalog.pm: List declaration without trailing comma at line 30, column 23.  See page 17 of PBP.  (Severity: 1)
./src/backend/catalog/genbki.pl: List declaration without trailing comma at line 242, column 19.  See page 17 of PBP.  (Severity: 1)
./src/backend/catalog/genbki.pl: List declaration without trailing comma at line 627, column 20.  See page 17 of PBP.  (Severity: 1)
./src/backend/utils/mb/Unicode/UCS_to_most.pl: List declaration without trailing comma at line 23, column 16.  See page 17 of PBP.  (Severity: 1)
./src/backend/utils/mb/Unicode/UCS_to_SJIS.pl: List declaration without trailing comma at line 21, column 19.  See page 17 of PBP.  (Severity: 1)
./src/interfaces/ecpg/preproc/check_rules.pl: List declaration without trailing comma at line 38, column 20.  See page 17 of PBP.  (Severity: 1)
./src/test/perl/PostgresNode.pm: List declaration without trailing comma at line 1468, column 14.  See page 17 of PBP.  (Severity: 1)
./src/test/perl/PostgresNode.pm: List declaration without trailing comma at line 1657, column 16.  See page 17 of PBP.  (Severity: 1)
./src/test/perl/PostgresNode.pm: List declaration without trailing comma at line 1697, column 12.  See page 17 of PBP.  (Severity: 1)
./src/test/recovery/t/003_recovery_targets.pl: List declaration without trailing comma at line 119, column 20.  See page 17 of PBP.  (Severity: 1)
./src/test/recovery/t/003_recovery_targets.pl: List declaration without trailing comma at line 125, column 20.  See page 17 of PBP.  (Severity: 1)
./src/test/recovery/t/003_recovery_targets.pl: List declaration without trailing comma at line 131, column 20.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at line 22, column 28.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at line 81, column 17.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at line 653, column 14.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at line 709, column 15.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma at line 43, column 24.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma at line 55, column 29.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma at line 59, column 31.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma at line 75, column 25.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma at line 623, column 15.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 102, column 13.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 121, column 13.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 147, column 17.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 167, column 13.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 352, column 14.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 404, column 13.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma at line 554, column 10.  See page 17 of PBP.  (Severity: 1)

I'll take a look at those. There doesn't seem to be a reliable policy to
detect them in things other than lists (like anonymous hash and array
constructors using {} and []). There is one in the Pulp collection of
policies, but it doesn't seem to work very well.
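
As a rough (made-up) illustration of the gap, the policy should flag the
first declaration below but says nothing about the second:

    # flagged: multi-line list declaration without a trailing comma
    my @langs = (
        'plperl',
        'plpython'
    );

    # not covered: anonymous hash constructor with the same omission
    my $extraincludes = {
        'initdb' => ['src/timezone'],
        'psql'   => ['src/backend']
    };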

But this is all a bit away from the original discussion.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#13Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: Peter Eisentraut (#9)
1 attachment(s)
Re: perlcritic and perltidy

On 05/08/2018 07:53 AM, Peter Eisentraut wrote:

On 5/6/18 12:13, Andrew Dunstan wrote:

Essentially it adds some vertical whitespace to structures so that the
enclosing braces etc appear on their own lines. A very typical change
looks like this:

-         { code      => $code,
+         {
+           code      => $code,
            ucs       => $ucs,
            comment   => $rest,
            direction => $direction,
            f         => $in_file,
-           l         => $. };
+           l         => $.
+         };

The proposed changes certainly match the style we use in C better, which
is what some of the other settings were also informed by. So I'm in
favor of the changes -- for braces.

For parentheses, I'm not sure whether this is a good idea:

diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 2971e64..0d3184c 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -40,8 +40,11 @@ while (<$in>)
next if (($code & 0xFF) < 0xA1);
next
if (
-		!(     $code >= 0xA100 && $code <= 0xA9FF
-			|| $code >= 0xB000 && $code <= 0xF7FF));
+		!(
+			   $code >= 0xA100 && $code <= 0xA9FF
+			|| $code >= 0xB000 && $code <= 0xF7FF
+		)
+	  );

next if ($code >= 0xA2A1 && $code <= 0xA2B0);
next if ($code >= 0xA2E3 && $code <= 0xA2E4);

In a manual C-style indentation, this would just be

next if (!($code >= 0xA100 && $code <= 0xA9FF
|| $code >= 0xB000 && $code <= 0xF7FF));

but somehow the indent runs have managed to spread this compact
expression over the entire screen.

Can we have separate settings for braces and parentheses?

Yes. There are separate settings for the three types of brackets. Here's
what happens if we restrict the vertical tightness settings to parentheses.

I think that's an unambiguous improvement.

Despite what the perltidy manual page says about needing to use
--line-up-parentheses with the vertical-tightness settings, I don't think
we should use it; I find the results fairly ugly.  Also, I note that
according to the docs the -pbp setting includes -vt=2 without including
-lp, so they don't seem terribly consistent here.

So in summary let's just go with

--paren-vertical-tightness=2
--paren-vertical-tightness-closing=2
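
i.e. the tail of src/tools/pgindent/perltidyrc would end up roughly like
this (just a sketch, the committed hunk may differ):

    --output-line-ending=unix
    --paren-tightness=2
    --paren-vertical-tightness=2
    --paren-vertical-tightness-closing=2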

cheers

andrew
 

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

paren-vt.patchtext/x-patch; name=paren-vt.patchDownload
diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index 3bc218d..c8dc956 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -95,10 +95,12 @@ sub ParseHeader
 		elsif (/^DECLARE_(UNIQUE_)?INDEX\(\s*(\w+),\s*(\d+),\s*(.+)\)/)
 		{
 			push @{ $catalog{indexing} },
-			  { is_unique => $1 ? 1 : 0,
+			  {
+				is_unique => $1 ? 1 : 0,
 				index_name => $2,
 				index_oid  => $3,
-				index_decl => $4 };
+				index_decl => $4
+			  };
 		}
 		elsif (/^CATALOG\((\w+),(\d+),(\w+)\)/)
 		{
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index 5fd5313..1cbc250 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -97,11 +97,13 @@ foreach my $row (@{ $catalog_data{pg_proc} })
 	next if $bki_values{prolang} ne $INTERNALlanguageId;
 
 	push @fmgr,
-	  { oid    => $bki_values{oid},
+	  {
+		oid    => $bki_values{oid},
 		strict => $bki_values{proisstrict},
 		retset => $bki_values{proretset},
 		nargs  => $bki_values{pronargs},
-		prosrc => $bki_values{prosrc}, };
+		prosrc => $bki_values{prosrc},
+	  };
 }
 
 # Emit headers for both files
diff --git a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
index 7d497c6..672d890 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
@@ -48,12 +48,14 @@ foreach my $i (@$cp950txt)
 		&& $code <= 0xf9dc)
 	{
 		push @$all,
-		  { code      => $code,
+		  {
+			code      => $code,
 			ucs       => $ucs,
 			comment   => $i->{comment},
 			direction => BOTH,
 			f         => $i->{f},
-			l         => $i->{l} };
+			l         => $i->{l}
+		  };
 	}
 }
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 2971e64..00c1f33 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -70,11 +70,13 @@ while (<$in>)
 	}
 
 	push @mapping,
-	  { ucs       => $ucs,
+	  {
+		ucs       => $ucs,
 		code      => $code,
 		direction => BOTH,
 		f         => $in_file,
-		l         => $. };
+		l         => $.
+	  };
 }
 close($in);
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
index 1c1152e..9ad7dd0 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
@@ -33,13 +33,15 @@ while (my $line = <$in>)
 		my $ucs2 = hex($u2);
 
 		push @all,
-		  { direction  => BOTH,
+		  {
+			direction  => BOTH,
 			ucs        => $ucs1,
 			ucs_second => $ucs2,
 			code       => $code,
 			comment    => $rest,
 			f          => $in_file,
-			l          => $. };
+			l          => $.
+		  };
 	}
 	elsif ($line =~ /^0x(.*)[ \t]*U\+(.*)[ \t]*#(.*)$/)
 	{
@@ -52,12 +54,14 @@ while (my $line = <$in>)
 		next if ($code < 0x80 && $ucs < 0x80);
 
 		push @all,
-		  { direction => BOTH,
+		  {
+			direction => BOTH,
 			ucs       => $ucs,
 			code      => $code,
 			comment   => $rest,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
index 0ac1f17..784e9e4 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JP.pl
@@ -115,352 +115,524 @@ foreach my $i (@mapping)
 }
 
 push @mapping, (
-	{   direction => BOTH,
+	{
+		direction => BOTH,
 		ucs       => 0x4efc,
 		code      => 0x8ff4af,
-		comment   => '# CJK(4EFC)' },
-	{   direction => BOTH,
+		comment   => '# CJK(4EFC)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x50f4,
 		code      => 0x8ff4b0,
-		comment   => '# CJK(50F4)' },
-	{   direction => BOTH,
+		comment   => '# CJK(50F4)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x51EC,
 		code      => 0x8ff4b1,
-		comment   => '# CJK(51EC)' },
-	{   direction => BOTH,
+		comment   => '# CJK(51EC)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5307,
 		code      => 0x8ff4b2,
-		comment   => '# CJK(5307)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5307)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5324,
 		code      => 0x8ff4b3,
-		comment   => '# CJK(5324)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5324)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x548A,
 		code      => 0x8ff4b5,
-		comment   => '# CJK(548A)' },
-	{   direction => BOTH,
+		comment   => '# CJK(548A)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5759,
 		code      => 0x8ff4b6,
-		comment   => '# CJK(5759)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5759)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x589E,
 		code      => 0x8ff4b9,
-		comment   => '# CJK(589E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(589E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5BEC,
 		code      => 0x8ff4ba,
-		comment   => '# CJK(5BEC)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5BEC)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5CF5,
 		code      => 0x8ff4bb,
-		comment   => '# CJK(5CF5)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5CF5)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5D53,
 		code      => 0x8ff4bc,
-		comment   => '# CJK(5D53)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5D53)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x5FB7,
 		code      => 0x8ff4be,
-		comment   => '# CJK(5FB7)' },
-	{   direction => BOTH,
+		comment   => '# CJK(5FB7)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6085,
 		code      => 0x8ff4bf,
-		comment   => '# CJK(6085)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6085)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6120,
 		code      => 0x8ff4c0,
-		comment   => '# CJK(6120)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6120)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x654E,
 		code      => 0x8ff4c1,
-		comment   => '# CJK(654E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(654E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x663B,
 		code      => 0x8ff4c2,
-		comment   => '# CJK(663B)' },
-	{   direction => BOTH,
+		comment   => '# CJK(663B)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6665,
 		code      => 0x8ff4c3,
-		comment   => '# CJK(6665)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6665)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6801,
 		code      => 0x8ff4c6,
-		comment   => '# CJK(6801)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6801)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6A6B,
 		code      => 0x8ff4c9,
-		comment   => '# CJK(6A6B)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6A6B)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6AE2,
 		code      => 0x8ff4ca,
-		comment   => '# CJK(6AE2)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6AE2)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6DF2,
 		code      => 0x8ff4cc,
-		comment   => '# CJK(6DF2)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6DF2)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x6DF8,
 		code      => 0x8ff4cb,
-		comment   => '# CJK(6DF8)' },
-	{   direction => BOTH,
+		comment   => '# CJK(6DF8)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7028,
 		code      => 0x8ff4cd,
-		comment   => '# CJK(7028)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7028)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x70BB,
 		code      => 0x8ff4ae,
-		comment   => '# CJK(70BB)' },
-	{   direction => BOTH,
+		comment   => '# CJK(70BB)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7501,
 		code      => 0x8ff4d0,
-		comment   => '# CJK(7501)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7501)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7682,
 		code      => 0x8ff4d1,
-		comment   => '# CJK(7682)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7682)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x769E,
 		code      => 0x8ff4d2,
-		comment   => '# CJK(769E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(769E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7930,
 		code      => 0x8ff4d4,
-		comment   => '# CJK(7930)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7930)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7AE7,
 		code      => 0x8ff4d9,
-		comment   => '# CJK(7AE7)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7AE7)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7DA0,
 		code      => 0x8ff4dc,
-		comment   => '# CJK(7DA0)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7DA0)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x7DD6,
 		code      => 0x8ff4dd,
-		comment   => '# CJK(7DD6)' },
-	{   direction => BOTH,
+		comment   => '# CJK(7DD6)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8362,
 		code      => 0x8ff4df,
-		comment   => '# CJK(8362)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8362)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x85B0,
 		code      => 0x8ff4e1,
-		comment   => '# CJK(85B0)' },
-	{   direction => BOTH,
+		comment   => '# CJK(85B0)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8807,
 		code      => 0x8ff4e4,
-		comment   => '# CJK(8807)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8807)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8B7F,
 		code      => 0x8ff4e6,
-		comment   => '# CJK(8B7F)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8B7F)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8CF4,
 		code      => 0x8ff4e7,
-		comment   => '# CJK(8CF4)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8CF4)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x8D76,
 		code      => 0x8ff4e8,
-		comment   => '# CJK(8D76)' },
-	{   direction => BOTH,
+		comment   => '# CJK(8D76)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x90DE,
 		code      => 0x8ff4ec,
-		comment   => '# CJK(90DE)' },
-	{   direction => BOTH,
+		comment   => '# CJK(90DE)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9115,
 		code      => 0x8ff4ee,
-		comment   => '# CJK(9115)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9115)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9592,
 		code      => 0x8ff4f1,
-		comment   => '# CJK(9592)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9592)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x973B,
 		code      => 0x8ff4f4,
-		comment   => '# CJK(973B)' },
-	{   direction => BOTH,
+		comment   => '# CJK(973B)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x974D,
 		code      => 0x8ff4f5,
-		comment   => '# CJK(974D)' },
-	{   direction => BOTH,
+		comment   => '# CJK(974D)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9751,
 		code      => 0x8ff4f6,
-		comment   => '# CJK(9751)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9751)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x999E,
 		code      => 0x8ff4fa,
-		comment   => '# CJK(999E)' },
-	{   direction => BOTH,
+		comment   => '# CJK(999E)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9AD9,
 		code      => 0x8ff4fb,
-		comment   => '# CJK(9AD9)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9AD9)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9B72,
 		code      => 0x8ff4fc,
-		comment   => '# CJK(9B72)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9B72)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x9ED1,
 		code      => 0x8ff4fe,
-		comment   => '# CJK(9ED1)' },
-	{   direction => BOTH,
+		comment   => '# CJK(9ED1)'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xF929,
 		code      => 0x8ff4c5,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F929' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F929'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xF9DC,
 		code      => 0x8ff4f2,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F9DC' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-F9DC'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA0E,
 		code      => 0x8ff4b4,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0E' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0E'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA0F,
 		code      => 0x8ff4b7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0F' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA0F'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA10,
 		code      => 0x8ff4b8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA10' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA10'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA11,
 		code      => 0x8ff4bd,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA11' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA11'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA12,
 		code      => 0x8ff4c4,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA12' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA12'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA13,
 		code      => 0x8ff4c7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA13' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA13'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA14,
 		code      => 0x8ff4c8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA14' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA14'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA15,
 		code      => 0x8ff4ce,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA15' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA15'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA16,
 		code      => 0x8ff4cf,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA16' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA16'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA17,
 		code      => 0x8ff4d3,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA17' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA17'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA18,
 		code      => 0x8ff4d5,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA18' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA18'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA19,
 		code      => 0x8ff4d6,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA19' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA19'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1A,
 		code      => 0x8ff4d7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1A' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1A'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1B,
 		code      => 0x8ff4d8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1B' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1B'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1C,
 		code      => 0x8ff4da,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1C' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1C'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1D,
 		code      => 0x8ff4db,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1D' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1D'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1E,
 		code      => 0x8ff4de,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1E' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1E'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA1F,
 		code      => 0x8ff4e0,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1F' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA1F'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA20,
 		code      => 0x8ff4e2,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA20' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA20'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA21,
 		code      => 0x8ff4e3,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA21' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA21'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA22,
 		code      => 0x8ff4e5,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA22' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA22'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA23,
 		code      => 0x8ff4e9,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA23' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA23'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA24,
 		code      => 0x8ff4ea,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA24' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA24'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA25,
 		code      => 0x8ff4eb,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA25' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA25'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA26,
 		code      => 0x8ff4ed,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA26' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA26'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA27,
 		code      => 0x8ff4ef,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA27' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA27'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA28,
 		code      => 0x8ff4f0,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA28' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA28'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA29,
 		code      => 0x8ff4f3,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA29' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA29'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2A,
 		code      => 0x8ff4f7,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2A' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2A'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2B,
 		code      => 0x8ff4f8,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2B' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2B'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2C,
 		code      => 0x8ff4f9,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2C' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2C'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFA2D,
 		code      => 0x8ff4fd,
-		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2D' },
-	{   direction => BOTH,
+		comment   => '# CJK COMPATIBILITY IDEOGRAPH-FA2D'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFF07,
 		code      => 0x8ff4a9,
-		comment   => '# FULLWIDTH APOSTROPHE' },
-	{   direction => BOTH,
+		comment   => '# FULLWIDTH APOSTROPHE'
+	},
+	{
+		direction => BOTH,
 		ucs       => 0xFFE4,
 		code      => 0x8fa2c3,
-		comment   => '# FULLWIDTH BROKEN BAR' },
+		comment   => '# FULLWIDTH BROKEN BAR'
+	},
 
 	# additional conversions for EUC_JP -> UTF-8 conversion
-	{   direction => TO_UNICODE,
+	{
+		direction => TO_UNICODE,
 		ucs       => 0x2116,
 		code      => 0x8ff4ac,
-		comment   => '# NUMERO SIGN' },
-	{   direction => TO_UNICODE,
+		comment   => '# NUMERO SIGN'
+	},
+	{
+		direction => TO_UNICODE,
 		ucs       => 0x2121,
 		code      => 0x8ff4ad,
-		comment   => '# TELEPHONE SIGN' },
-	{   direction => TO_UNICODE,
+		comment   => '# TELEPHONE SIGN'
+	},
+	{
+		direction => TO_UNICODE,
 		ucs       => 0x3231,
 		code      => 0x8ff4ab,
-		comment   => '# PARENTHESIZED IDEOGRAPH STOCK' });
+		comment   => '# PARENTHESIZED IDEOGRAPH STOCK'
+	});
 
 print_conversion_tables($this_script, "EUC_JP", \@mapping);
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
index 4d6a3ca..4e4e3fd 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
@@ -32,23 +32,29 @@ foreach my $i (@$mapping)
 
 # Some extra characters that are not in KSX1001.TXT
 push @$mapping,
-  ( {   direction => BOTH,
+  ( {
+		direction => BOTH,
 		ucs       => 0x20AC,
 		code      => 0xa2e6,
 		comment   => '# EURO SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x00AE,
 		code      => 0xa2e7,
 		comment   => '# REGISTERED SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x327E,
 		code      => 0xa2e8,
 		comment   => '# CIRCLED HANGUL IEUNG U',
 		f         => $this_script,
-		l         => __LINE__ });
+		l         => __LINE__
+	});
 
 print_conversion_tables($this_script, "EUC_KR", $mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
index 89f3cd7..98d4156d 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
@@ -53,12 +53,14 @@ foreach my $i (@$mapping)
 	if ($origcode >= 0x12121 && $origcode <= 0x20000)
 	{
 		push @extras,
-		  { ucs       => $i->{ucs},
+		  {
+			ucs       => $i->{ucs},
 			code      => ($i->{code} + 0x8ea10000),
 			rest      => $i->{rest},
 			direction => TO_UNICODE,
 			f         => $i->{f},
-			l         => $i->{l} };
+			l         => $i->{l}
+		  };
 	}
 }
 
diff --git a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
index ec184d7..65ffee3 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
@@ -36,11 +36,13 @@ while (<$in>)
 	if ($code >= 0x80 && $ucs >= 0x0080)
 	{
 		push @mapping,
-		  { ucs       => $ucs,
+		  {
+			ucs       => $ucs,
 			code      => $code,
 			direction => BOTH,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
index b580373..79901dc 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
@@ -26,23 +26,29 @@ my $mapping = &read_source("JOHAB.TXT");
 
 # Some extra characters that are not in JOHAB.TXT
 push @$mapping,
-  ( {   direction => BOTH,
+  ( {
+		direction => BOTH,
 		ucs       => 0x20AC,
 		code      => 0xd9e6,
 		comment   => '# EURO SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x00AE,
 		code      => 0xd9e7,
 		comment   => '# REGISTERED SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => BOTH,
+		l         => __LINE__
+	},
+	{
+		direction => BOTH,
 		ucs       => 0x327E,
 		code      => 0xd9e8,
 		comment   => '# CIRCLED HANGUL IEUNG U',
 		f         => $this_script,
-		l         => __LINE__ });
+		l         => __LINE__
+	});
 
 print_conversion_tables($this_script, "JOHAB", $mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
index d153e4c..bb84458 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
@@ -33,13 +33,15 @@ while (my $line = <$in>)
 		my $ucs2 = hex($u2);
 
 		push @mapping,
-		  { code       => $code,
+		  {
+			code       => $code,
 			ucs        => $ucs1,
 			ucs_second => $ucs2,
 			comment    => $rest,
 			direction  => BOTH,
 			f          => $in_file,
-			l          => $. };
+			l          => $.
+		  };
 	}
 	elsif ($line =~ /^0x(.*)[ \t]*U\+(.*)[ \t]*#(.*)$/)
 	{
@@ -68,12 +70,14 @@ while (my $line = <$in>)
 		}
 
 		push @mapping,
-		  { code      => $code,
+		  {
+			code      => $code,
 			ucs       => $ucs,
 			comment   => $rest,
 			direction => $direction,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
index a50f6f3..738c195 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
@@ -36,53 +36,69 @@ foreach my $i (@$mapping)
 
 # Add these UTF8->SJIS pairs to the table.
 push @$mapping,
-  ( {   direction => FROM_UNICODE,
+  ( {
+		direction => FROM_UNICODE,
 		ucs       => 0x00a2,
 		code      => 0x8191,
 		comment   => '# CENT SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00a3,
 		code      => 0x8192,
 		comment   => '# POUND SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00a5,
 		code      => 0x5c,
 		comment   => '# YEN SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x00ac,
 		code      => 0x81ca,
 		comment   => '# NOT SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x2016,
 		code      => 0x8161,
 		comment   => '# DOUBLE VERTICAL LINE',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x203e,
 		code      => 0x7e,
 		comment   => '# OVERLINE',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x2212,
 		code      => 0x817c,
 		comment   => '# MINUS SIGN',
 		f         => $this_script,
-		l         => __LINE__ },
-	{   direction => FROM_UNICODE,
+		l         => __LINE__
+	},
+	{
+		direction => FROM_UNICODE,
 		ucs       => 0x301c,
 		code      => 0x8160,
 		comment   => '# WAVE DASH',
 		f         => $this_script,
-		l         => __LINE__ });
+		l         => __LINE__
+	});
 
 print_conversion_tables($this_script, "SJIS", $mapping);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
index dc9fb75..4231aaf 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
@@ -39,22 +39,26 @@ while (<$in>)
 	if ($code >= 0x80 && $ucs >= 0x0080)
 	{
 		push @mapping,
-		  { ucs       => $ucs,
+		  {
+			ucs       => $ucs,
 			code      => $code,
 			direction => BOTH,
 			f         => $in_file,
-			l         => $. };
+			l         => $.
+		  };
 	}
 }
 close($in);
 
 # One extra character that's not in the source file.
 push @mapping,
-  { direction => BOTH,
+  {
+	direction => BOTH,
 	code      => 0xa2e8,
 	ucs       => 0x327e,
 	comment   => 'CIRCLED HANGUL IEUNG U',
 	f         => $this_script,
-	l         => __LINE__ };
+	l         => __LINE__
+  };
 
 print_conversion_tables($this_script, "UHC", \@mapping);
diff --git a/src/backend/utils/mb/Unicode/convutils.pm b/src/backend/utils/mb/Unicode/convutils.pm
index 03151fa..69ec099 100644
--- a/src/backend/utils/mb/Unicode/convutils.pm
+++ b/src/backend/utils/mb/Unicode/convutils.pm
@@ -18,7 +18,8 @@ use constant {
 	NONE         => 0,
 	TO_UNICODE   => 1,
 	FROM_UNICODE => 2,
-	BOTH         => 3 };
+	BOTH         => 3
+};
 
 #######################################################################
 # read_source - common routine to read source file
@@ -56,7 +57,8 @@ sub read_source
 			comment   => $4,
 			direction => BOTH,
 			f         => $fname,
-			l         => $. };
+			l         => $.
+		};
 
 		# Ignore pure ASCII mappings. PostgreSQL character conversion code
 		# never even passes these to the conversion code.
@@ -370,9 +372,11 @@ sub print_radix_table
 	}
 
 	unshift @segments,
-	  { header  => "Dummy map, for invalid values",
+	  {
+		header  => "Dummy map, for invalid values",
 		min_idx => 0,
-		max_idx => $widest_range };
+		max_idx => $widest_range
+	  };
 
 	###
 	### Eliminate overlapping zeros
@@ -655,12 +659,14 @@ sub build_segments_recurse
 	if ($level == $depth)
 	{
 		push @segments,
-		  { header => $header . ", leaf: ${path}xx",
+		  {
+			header => $header . ", leaf: ${path}xx",
 			label  => $label,
 			level  => $level,
 			depth  => $depth,
 			path   => $path,
-			values => $map };
+			values => $map
+		  };
 	}
 	else
 	{
@@ -678,12 +684,14 @@ sub build_segments_recurse
 		}
 
 		push @segments,
-		  { header => $header . ", byte #$level: ${path}xx",
+		  {
+			header => $header . ", byte #$level: ${path}xx",
 			label  => $label,
 			level  => $level,
 			depth  => $depth,
 			path   => $path,
-			values => \%children };
+			values => \%children
+		  };
 	}
 	return @segments;
 }
@@ -776,7 +784,8 @@ sub make_charmap_combined
 				code        => $c->{code},
 				comment     => $c->{comment},
 				f           => $c->{f},
-				l           => $c->{l} };
+				l           => $c->{l}
+			};
 			push @combined, $entry;
 		}
 	}
diff --git a/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl b/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl
index fdedd2f..c84674c 100644
--- a/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl
+++ b/src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl
@@ -73,8 +73,10 @@ sub run_check
 	create_files();
 
 	command_ok(
-		[   'pg_archivecleanup', '-x', '.gz', $tempdir,
-			$walfiles[2] . $suffix ],
+		[
+			'pg_archivecleanup', '-x', '.gz', $tempdir,
+			$walfiles[2] . $suffix
+		],
 		"$test_name: runs");
 
 	ok(!-f "$tempdir/$walfiles[0]",
diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
index d7ab36b..6b2e028 100644
--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl
+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
@@ -159,8 +159,10 @@ isnt(slurp_file("$tempdir/backup/backup_label"),
 rmtree("$tempdir/backup");
 
 $node->command_ok(
-	[   'pg_basebackup', '-D', "$tempdir/backup2", '--waldir',
-		"$tempdir/xlog2" ],
+	[
+		'pg_basebackup', '-D', "$tempdir/backup2", '--waldir',
+		"$tempdir/xlog2"
+	],
 	'separate xlog directory');
 ok(-f "$tempdir/backup2/PG_VERSION", 'backup was created');
 ok(-d "$tempdir/xlog2/",             'xlog directory was created');
@@ -179,8 +181,10 @@ $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp', "-T/foo=" ],
 	'-T with empty new directory fails');
 $node->command_fails(
-	[   'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp',
-		"-T/foo=/bar=/baz" ],
+	[
+		'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp',
+		"-T/foo=/bar=/baz"
+	],
 	'-T with multiple = fails');
 $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backup_foo", '-Fp', "-Tfoo=/bar" ],
@@ -279,8 +283,10 @@ SKIP:
 		'plain format with tablespaces fails without tablespace mapping');
 
 	$node->command_ok(
-		[   'pg_basebackup', '-D', "$tempdir/backup1", '-Fp',
-			"-T$shorter_tempdir/tblspc1=$tempdir/tbackup/tblspc1" ],
+		[
+			'pg_basebackup', '-D', "$tempdir/backup1", '-Fp',
+			"-T$shorter_tempdir/tblspc1=$tempdir/tbackup/tblspc1"
+		],
 		'plain format with tablespaces succeeds with tablespace mapping');
 	ok(-d "$tempdir/tbackup/tblspc1", 'tablespace was relocated');
 	opendir(my $dh, "$pgdata/pg_tblspc") or die;
@@ -330,8 +336,10 @@ SKIP:
 	$node->safe_psql('postgres',
 		"CREATE TABLESPACE tblspc2 LOCATION '$shorter_tempdir/tbl=spc2';");
 	$node->command_ok(
-		[   'pg_basebackup', '-D', "$tempdir/backup3", '-Fp',
-			"-T$shorter_tempdir/tbl\\=spc2=$tempdir/tbackup/tbl\\=spc2" ],
+		[
+			'pg_basebackup', '-D', "$tempdir/backup3", '-Fp',
+			"-T$shorter_tempdir/tbl\\=spc2=$tempdir/tbackup/tbl\\=spc2"
+		],
 		'mapping tablespace with = sign in path');
 	ok(-d "$tempdir/tbackup/tbl=spc2",
 		'tablespace with = sign was relocated');
@@ -389,17 +397,21 @@ $node->command_ok(
 ok(-f "$tempdir/backupxst/pg_wal.tar", "tar file was created");
 rmtree("$tempdir/backupxst");
 $node->command_ok(
-	[   'pg_basebackup',         '-D',
+	[
+		'pg_basebackup',         '-D',
 		"$tempdir/backupnoslot", '-X',
-		'stream',                '--no-slot' ],
+		'stream',                '--no-slot'
+	],
 	'pg_basebackup -X stream runs with --no-slot');
 rmtree("$tempdir/backupnoslot");
 
 $node->command_fails(
-	[   'pg_basebackup',             '-D',
+	[
+		'pg_basebackup',             '-D',
 		"$tempdir/backupxs_sl_fail", '-X',
 		'stream',                    '-S',
-		'slot0' ],
+		'slot0'
+	],
 	'pg_basebackup fails with nonexistent replication slot');
 
 $node->command_fails(
@@ -407,10 +419,12 @@ $node->command_fails(
 	'pg_basebackup -C fails without slot name');
 
 $node->command_fails(
-	[   'pg_basebackup',          '-D',
+	[
+		'pg_basebackup',          '-D',
 		"$tempdir/backupxs_slot", '-C',
 		'-S',                     'slot0',
-		'--no-slot' ],
+		'--no-slot'
+	],
 	'pg_basebackup fails with -C -S --no-slot');
 
 $node->command_ok(
@@ -446,8 +460,10 @@ $node->command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/fail", '-S', 'slot1', '-X', 'none' ],
 	'pg_basebackup with replication slot fails without WAL streaming');
 $node->command_ok(
-	[   'pg_basebackup', '-D', "$tempdir/backupxs_sl", '-X',
-		'stream',        '-S', 'slot1' ],
+	[
+		'pg_basebackup', '-D', "$tempdir/backupxs_sl", '-X',
+		'stream',        '-S', 'slot1'
+	],
 	'pg_basebackup -X stream with replication slot runs');
 $lsn = $node->safe_psql('postgres',
 	q{SELECT restart_lsn FROM pg_replication_slots WHERE slot_name = 'slot1'}
@@ -456,8 +472,10 @@ like($lsn, qr!^0/[0-9A-Z]{7,8}$!, 'restart LSN of slot has advanced');
 rmtree("$tempdir/backupxs_sl");
 
 $node->command_ok(
-	[   'pg_basebackup', '-D', "$tempdir/backupxs_sl_R", '-X',
-		'stream',        '-S', 'slot1',                  '-R' ],
+	[
+		'pg_basebackup', '-D', "$tempdir/backupxs_sl_R", '-X',
+		'stream',        '-S', 'slot1',                  '-R'
+	],
 	'pg_basebackup with replication slot and -R runs');
 like(
 	slurp_file("$tempdir/backupxs_sl_R/recovery.conf"),
diff --git a/src/bin/pg_basebackup/t/020_pg_receivewal.pl b/src/bin/pg_basebackup/t/020_pg_receivewal.pl
index 0793f9c..6e2f051 100644
--- a/src/bin/pg_basebackup/t/020_pg_receivewal.pl
+++ b/src/bin/pg_basebackup/t/020_pg_receivewal.pl
@@ -57,8 +57,10 @@ $primary->psql('postgres',
 
 # Stream up to the given position.
 $primary->command_ok(
-	[   'pg_receivewal', '-D',     $stream_dir,     '--verbose',
-		'--endpos',      $nextlsn, '--synchronous', '--no-loop' ],
+	[
+		'pg_receivewal', '-D',     $stream_dir,     '--verbose',
+		'--endpos',      $nextlsn, '--synchronous', '--no-loop'
+	],
 	'streaming some WAL with --synchronous');
 
 # Permissions on WAL files should be default
diff --git a/src/bin/pg_basebackup/t/030_pg_recvlogical.pl b/src/bin/pg_basebackup/t/030_pg_recvlogical.pl
index e9d0941..99154bc 100644
--- a/src/bin/pg_basebackup/t/030_pg_recvlogical.pl
+++ b/src/bin/pg_basebackup/t/030_pg_recvlogical.pl
@@ -29,15 +29,19 @@ $node->command_fails([ 'pg_recvlogical', '-S', 'test' ],
 $node->command_fails([ 'pg_recvlogical', '-S', 'test', '-d', 'postgres' ],
 	'pg_recvlogical needs an action');
 $node->command_fails(
-	[   'pg_recvlogical',           '-S',
+	[
+		'pg_recvlogical',           '-S',
 		'test',                     '-d',
-		$node->connstr('postgres'), '--start' ],
+		$node->connstr('postgres'), '--start'
+	],
 	'no destination file');
 
 $node->command_ok(
-	[   'pg_recvlogical',           '-S',
+	[
+		'pg_recvlogical',           '-S',
 		'test',                     '-d',
-		$node->connstr('postgres'), '--create-slot' ],
+		$node->connstr('postgres'), '--create-slot'
+	],
 	'slot created');
 
 my $slot = $node->slot('test');
@@ -51,6 +55,8 @@ my $nextlsn =
 chomp($nextlsn);
 
 $node->command_ok(
-	[   'pg_recvlogical', '-S', 'test', '-d', $node->connstr('postgres'),
-		'--start', '--endpos', "$nextlsn", '--no-loop', '-f', '-' ],
+	[
+		'pg_recvlogical', '-S', 'test', '-d', $node->connstr('postgres'),
+		'--start', '--endpos', "$nextlsn", '--no-loop', '-f', '-'
+	],
 	'replayed a transaction');
diff --git a/src/bin/pg_controldata/t/001_pg_controldata.pl b/src/bin/pg_controldata/t/001_pg_controldata.pl
index a9862ae..3b63ad2 100644
--- a/src/bin/pg_controldata/t/001_pg_controldata.pl
+++ b/src/bin/pg_controldata/t/001_pg_controldata.pl
@@ -33,7 +33,9 @@ close $fh;
 command_checks_all(
 	[ 'pg_controldata', $node->data_dir ],
 	0,
-	[   qr/WARNING: Calculated CRC checksum does not match value stored in file/,
-		qr/WARNING: invalid WAL segment size/ ],
+	[
+		qr/WARNING: Calculated CRC checksum does not match value stored in file/,
+		qr/WARNING: invalid WAL segment size/
+	],
 	[qr/^$/],
 	'pg_controldata with corrupted pg_control');
diff --git a/src/bin/pg_ctl/t/001_start_stop.pl b/src/bin/pg_ctl/t/001_start_stop.pl
index 5bbb799..50a57d0 100644
--- a/src/bin/pg_ctl/t/001_start_stop.pl
+++ b/src/bin/pg_ctl/t/001_start_stop.pl
@@ -36,7 +36,8 @@ else
 close $conf;
 my $ctlcmd = [
 	'pg_ctl', 'start', '-D', "$tempdir/data", '-l',
-	"$TestLib::log_path/001_start_stop_server.log" ];
+	"$TestLib::log_path/001_start_stop_server.log"
+];
 if ($Config{osname} ne 'msys')
 {
 	command_like($ctlcmd, qr/done.*server started/s, 'pg_ctl start');
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 81cd65e..fe036b5 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -50,7 +50,9 @@ my %pgdump_runs = (
 		restore_cmd => [
 			'pg_restore', '-Fc', '--verbose',
 			"--file=$tempdir/binary_upgrade.sql",
-			"$tempdir/binary_upgrade.dump", ], },
+			"$tempdir/binary_upgrade.dump",
+		],
+	},
 	clean => {
 		dump_cmd => [
 			'pg_dump',
@@ -58,7 +60,8 @@ my %pgdump_runs = (
 			"--file=$tempdir/clean.sql",
 			'-c',
 			'-d', 'postgres',    # alternative way to specify database
-		], },
+		],
+	},
 	clean_if_exists => {
 		dump_cmd => [
 			'pg_dump',
@@ -67,12 +70,16 @@ my %pgdump_runs = (
 			'-c',
 			'--if-exists',
 			'--encoding=UTF8',    # no-op, just tests that option is accepted
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	column_inserts => {
 		dump_cmd => [
 			'pg_dump',                            '--no-sync',
 			"--file=$tempdir/column_inserts.sql", '-a',
-			'--column-inserts',                   'postgres', ], },
+			'--column-inserts',                   'postgres',
+		],
+	},
 	createdb => {
 		dump_cmd => [
 			'pg_dump',
@@ -81,7 +88,9 @@ my %pgdump_runs = (
 			'-C',
 			'-R',    # no-op, just for testing
 			'-v',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	data_only => {
 		dump_cmd => [
 			'pg_dump',
@@ -91,78 +100,102 @@ my %pgdump_runs = (
 			'--superuser=test_superuser',
 			'--disable-triggers',
 			'-v',    # no-op, just make sure it works
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	defaults => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			'-f',      "$tempdir/defaults.sql",
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	defaults_no_public => {
 		database => 'regress_pg_dump_test',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-f', "$tempdir/defaults_no_public.sql",
-			'regress_pg_dump_test', ], },
+			'regress_pg_dump_test',
+		],
+	},
 	defaults_no_public_clean => {
 		database => 'regress_pg_dump_test',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-c', '-f',
 			"$tempdir/defaults_no_public_clean.sql",
-			'regress_pg_dump_test', ], },
+			'regress_pg_dump_test',
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_custom_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '-Fc', '-Z6',
-			"--file=$tempdir/defaults_custom_format.dump", 'postgres', ],
+			"--file=$tempdir/defaults_custom_format.dump", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore', '-Fc',
 			"--file=$tempdir/defaults_custom_format.sql",
-			"$tempdir/defaults_custom_format.dump", ], },
+			"$tempdir/defaults_custom_format.dump",
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_dir_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump',                             '-Fd',
-			"--file=$tempdir/defaults_dir_format", 'postgres', ],
+			"--file=$tempdir/defaults_dir_format", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore', '-Fd',
 			"--file=$tempdir/defaults_dir_format.sql",
-			"$tempdir/defaults_dir_format", ], },
+			"$tempdir/defaults_dir_format",
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_parallel => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '-Fd', '-j2', "--file=$tempdir/defaults_parallel",
-			'postgres', ],
+			'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_parallel.sql",
-			"$tempdir/defaults_parallel", ], },
+			"$tempdir/defaults_parallel",
+		],
+	},
 
 	# Do not use --no-sync to give test coverage for data sync.
 	defaults_tar_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump',                                 '-Ft',
-			"--file=$tempdir/defaults_tar_format.tar", 'postgres', ],
+			"--file=$tempdir/defaults_tar_format.tar", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			'--format=tar',
 			"--file=$tempdir/defaults_tar_format.sql",
-			"$tempdir/defaults_tar_format.tar", ], },
+			"$tempdir/defaults_tar_format.tar",
+		],
+	},
 	exclude_dump_test_schema => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			"--file=$tempdir/exclude_dump_test_schema.sql",
-			'--exclude-schema=dump_test', 'postgres', ], },
+			'--exclude-schema=dump_test', 'postgres',
+		],
+	},
 	exclude_test_table => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			"--file=$tempdir/exclude_test_table.sql",
-			'--exclude-table=dump_test.test_table', 'postgres', ], },
+			'--exclude-table=dump_test.test_table', 'postgres',
+		],
+	},
 	exclude_test_table_data => {
 		dump_cmd => [
 			'pg_dump',
@@ -170,39 +203,55 @@ my %pgdump_runs = (
 			"--file=$tempdir/exclude_test_table_data.sql",
 			'--exclude-table-data=dump_test.test_table',
 			'--no-unlogged-table-data',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	pg_dumpall_globals => {
 		dump_cmd => [
 			'pg_dumpall', '-v', "--file=$tempdir/pg_dumpall_globals.sql",
-			'-g', '--no-sync', ], },
+			'-g', '--no-sync',
+		],
+	},
 	pg_dumpall_globals_clean => {
 		dump_cmd => [
 			'pg_dumpall', "--file=$tempdir/pg_dumpall_globals_clean.sql",
-			'-g', '-c', '--no-sync', ], },
+			'-g', '-c', '--no-sync',
+		],
+	},
 	pg_dumpall_dbprivs => {
 		dump_cmd => [
 			'pg_dumpall', '--no-sync',
-			"--file=$tempdir/pg_dumpall_dbprivs.sql", ], },
+			"--file=$tempdir/pg_dumpall_dbprivs.sql",
+		],
+	},
 	no_blobs => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_blobs.sql", '-B',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	no_privs => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_privs.sql", '-x',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	no_owner => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_owner.sql", '-O',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	only_dump_test_schema => {
 		dump_cmd => [
 			'pg_dump', '--no-sync',
 			"--file=$tempdir/only_dump_test_schema.sql",
-			'--schema=dump_test', 'postgres', ], },
+			'--schema=dump_test', 'postgres',
+		],
+	},
 	only_dump_test_table => {
 		dump_cmd => [
 			'pg_dump',
@@ -210,7 +259,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/only_dump_test_table.sql",
 			'--table=dump_test.test_table',
 			'--lock-wait-timeout=1000000',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	role => {
 		dump_cmd => [
 			'pg_dump',
@@ -218,7 +269,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/role.sql",
 			'--role=regress_dump_test_role',
 			'--schema=dump_test_second_schema',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	role_parallel => {
 		test_key => 'role',
 		dump_cmd => [
@@ -229,39 +282,54 @@ my %pgdump_runs = (
 			"--file=$tempdir/role_parallel",
 			'--role=regress_dump_test_role',
 			'--schema=dump_test_second_schema',
-			'postgres', ],
+			'postgres',
+		],
 		restore_cmd => [
 			'pg_restore', "--file=$tempdir/role_parallel.sql",
-			"$tempdir/role_parallel", ], },
+			"$tempdir/role_parallel",
+		],
+	},
 	schema_only => {
 		dump_cmd => [
 			'pg_dump',                         '--format=plain',
 			"--file=$tempdir/schema_only.sql", '--no-sync',
-			'-s',                              'postgres', ], },
+			'-s',                              'postgres',
+		],
+	},
 	section_pre_data => {
 		dump_cmd => [
 			'pg_dump',            "--file=$tempdir/section_pre_data.sql",
 			'--section=pre-data', '--no-sync',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_data => {
 		dump_cmd => [
 			'pg_dump',        "--file=$tempdir/section_data.sql",
 			'--section=data', '--no-sync',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_post_data => {
 		dump_cmd => [
 			'pg_dump', "--file=$tempdir/section_post_data.sql",
-			'--section=post-data', '--no-sync', 'postgres', ], },
+			'--section=post-data', '--no-sync', 'postgres',
+		],
+	},
 	test_schema_plus_blobs => {
 		dump_cmd => [
 			'pg_dump', "--file=$tempdir/test_schema_plus_blobs.sql",
 
-			'--schema=dump_test', '-b', '-B', '--no-sync', 'postgres', ], },
+			'--schema=dump_test', '-b', '-B', '--no-sync', 'postgres',
+		],
+	},
 	with_oids => {
 		dump_cmd => [
 			'pg_dump',   '--oids',
 			'--no-sync', "--file=$tempdir/with_oids.sql",
-			'postgres', ], },);
+			'postgres',
+		],
+	},);
 
 ###############################################################
 # Definition of the tests to run.
@@ -338,7 +406,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role REVOKE' => {
 		create_order => 55,
@@ -351,7 +421,8 @@ my %tests = (
 			\QREVOKE ALL ON FUNCTIONS  FROM PUBLIC;\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role REVOKE SELECT'
 	  => {
@@ -368,7 +439,8 @@ my %tests = (
 			\QGRANT INSERT,REFERENCES,DELETE,TRIGGER,TRUNCATE,UPDATE ON TABLES  TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	  },
 
 	'ALTER ROLE regress_dump_test_role' => {
 		regexp => qr/^
@@ -379,23 +451,28 @@ my %tests = (
 		like => {
 			pg_dumpall_dbprivs       => 1,
 			pg_dumpall_globals       => 1,
-			pg_dumpall_globals_clean => 1, }, },
+			pg_dumpall_globals_clean => 1,
+		},
+	},
 
 	'ALTER COLLATION test0 OWNER TO' => {
 		regexp    => qr/^ALTER COLLATION public.test0 OWNER TO .*;/m,
 		collation => 1,
 		like      => { %full_runs, section_pre_data => 1, },
-		unlike    => { %dump_test_schema_runs, no_owner => 1, }, },
+		unlike    => { %dump_test_schema_runs, no_owner => 1, },
+	},
 
 	'ALTER FOREIGN DATA WRAPPER dummy OWNER TO' => {
 		regexp => qr/^ALTER FOREIGN DATA WRAPPER dummy OWNER TO .*;/m,
 		like   => { %full_runs, section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER SERVER s1 OWNER TO' => {
 		regexp => qr/^ALTER SERVER s1 OWNER TO .*;/m,
 		like   => { %full_runs, section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER FUNCTION dump_test.pltestlang_call_handler() OWNER TO' => {
 		regexp => qr/^
@@ -406,7 +483,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER OPERATOR FAMILY dump_test.op_family OWNER TO' => {
 		regexp => qr/^
@@ -417,7 +496,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER OPERATOR FAMILY dump_test.op_family USING btree' => {
 		create_order => 75,
@@ -442,7 +523,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER OPERATOR CLASS dump_test.op_class OWNER TO' => {
 		regexp => qr/^
@@ -453,12 +535,15 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER PUBLICATION pub1 OWNER TO' => {
 		regexp => qr/^ALTER PUBLICATION pub1 OWNER TO .*;/m,
 		like   => { %full_runs, section_post_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER LARGE OBJECT ... OWNER TO' => {
 		regexp => qr/^ALTER LARGE OBJECT \d+ OWNER TO .*;/m,
@@ -467,16 +552,20 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			no_blobs    => 1,
 			no_owner    => 1,
-			schema_only => 1, }, },
+			schema_only => 1,
+		},
+	},
 
 	'ALTER PROCEDURAL LANGUAGE pltestlang OWNER TO' => {
 		regexp => qr/^ALTER PROCEDURAL LANGUAGE pltestlang OWNER TO .*;/m,
 		like   => { %full_runs, section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER SCHEMA dump_test OWNER TO' => {
 		regexp => qr/^ALTER SCHEMA dump_test OWNER TO .*;/m,
@@ -484,15 +573,19 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER SCHEMA dump_test_second_schema OWNER TO' => {
 		regexp => qr/^ALTER SCHEMA dump_test_second_schema OWNER TO .*;/m,
 		like   => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER SEQUENCE test_table_col1_seq' => {
 		regexp => qr/^
@@ -502,10 +595,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER SEQUENCE test_third_table_col1_seq' => {
 		regexp => qr/^
@@ -514,7 +610,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ADD CONSTRAINT ... PRIMARY KEY' => {
 		regexp => qr/^
@@ -525,10 +623,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col1 SET STATISTICS 90' => {
 		create_order => 93,
@@ -541,10 +642,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col2 SET STORAGE' => {
 		create_order => 94,
@@ -557,10 +661,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col3 SET STORAGE' => {
 		create_order => 95,
@@ -573,10 +680,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY test_table ALTER COLUMN col4 SET n_distinct' => {
 		create_order => 95,
@@ -589,10 +699,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE ONLY dump_test.measurement ATTACH PARTITION measurement_y2006m2'
 	  => {
@@ -600,7 +713,8 @@ my %tests = (
 			\QALTER TABLE ONLY dump_test.measurement ATTACH PARTITION dump_test_second_schema.measurement_y2006m2 \E
 			\QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n
 			/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	  },
 
 	'ALTER TABLE test_table CLUSTER ON test_table_pkey' => {
 		create_order => 96,
@@ -613,10 +727,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE test_table DISABLE TRIGGER ALL' => {
 		regexp => qr/^
@@ -625,7 +742,8 @@ my %tests = (
 			\QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E
 			\n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n\n\n
 			\QALTER TABLE dump_test.test_table ENABLE TRIGGER ALL;\E/xm,
-		like => { data_only => 1, }, },
+		like => { data_only => 1, },
+	},
 
 	'ALTER FOREIGN TABLE foreign_table ALTER COLUMN c1 OPTIONS' => {
 		regexp => qr/^
@@ -635,7 +753,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER TABLE test_table OWNER TO' => {
 		regexp => qr/^ALTER TABLE dump_test.test_table OWNER TO .*;/m,
@@ -643,11 +762,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
 			exclude_test_table       => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TABLE test_table ENABLE ROW LEVEL SECURITY' => {
 		create_order => 23,
@@ -659,10 +781,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER TABLE test_second_table OWNER TO' => {
 		regexp => qr/^ALTER TABLE dump_test.test_second_table OWNER TO .*;/m,
@@ -670,7 +795,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TABLE test_third_table OWNER TO' => {
 		regexp =>
@@ -678,8 +805,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER TABLE measurement OWNER TO' => {
 		regexp => qr/^ALTER TABLE dump_test.measurement OWNER TO .*;/m,
@@ -687,7 +816,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TABLE measurement_y2006m2 OWNER TO' => {
 		regexp =>
@@ -695,8 +826,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_owner => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_owner => 1, },
+	},
 
 	'ALTER FOREIGN TABLE foreign_table OWNER TO' => {
 		regexp =>
@@ -705,7 +838,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 OWNER TO' => {
 		regexp =>
@@ -714,7 +849,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_owner                 => 1, }, },
+			no_owner                 => 1,
+		},
+	},
 
 	'ALTER TEXT SEARCH DICTIONARY alt_ts_dict1 OWNER TO' => {
 		regexp =>
@@ -725,7 +862,9 @@ my %tests = (
 			exclude_dump_test_schema => 1,
 			only_dump_test_table     => 1,
 			no_owner                 => 1,
-			role                     => 1, }, },
+			role                     => 1,
+		},
+	},
 
 	'BLOB create (using lo_from_bytea)' => {
 		create_order => 50,
@@ -737,10 +876,13 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			schema_only => 1,
-			no_blobs    => 1, }, },
+			no_blobs    => 1,
+		},
+	},
 
 	'BLOB load (using lo_from_bytea)' => {
 		regexp => qr/^
@@ -754,23 +896,28 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_data           => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			binary_upgrade => 1,
 			no_blobs       => 1,
-			schema_only    => 1, }, },
+			schema_only    => 1,
+		},
+	},
 
 	'COMMENT ON DATABASE postgres' => {
 		regexp => qr/^COMMENT ON DATABASE postgres IS .*;/m,
 
 		# Should appear in the same tests as "CREATE DATABASE postgres"
-		like => { createdb => 1, }, },
+		like => { createdb => 1, },
+	},
 
 	'COMMENT ON EXTENSION plpgsql' => {
 		regexp => qr/^COMMENT ON EXTENSION plpgsql IS .*;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'COMMENT ON TABLE dump_test.test_table' => {
 		create_order => 36,
@@ -782,10 +929,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'COMMENT ON COLUMN dump_test.test_table.col1' => {
 		create_order => 36,
@@ -798,10 +948,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'COMMENT ON COLUMN dump_test.composite.f1' => {
 		create_order => 44,
@@ -812,7 +965,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON COLUMN dump_test.test_second_table.col1' => {
 		create_order => 63,
@@ -823,7 +977,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON COLUMN dump_test.test_second_table.col2' => {
 		create_order => 64,
@@ -834,7 +989,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON CONVERSION dump_test.test_conversion' => {
 		create_order => 79,
@@ -844,7 +1000,8 @@ my %tests = (
 		  qr/^COMMENT ON CONVERSION dump_test.test_conversion IS 'comment on test conversion';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON COLLATION test0' => {
 		create_order => 77,
@@ -853,7 +1010,8 @@ my %tests = (
 		regexp =>
 		  qr/^COMMENT ON COLLATION public.test0 IS 'comment on test0 collation';/m,
 		collation => 1,
-		like      => { %full_runs, section_pre_data => 1, }, },
+		like      => { %full_runs, section_pre_data => 1, },
+	},
 
 	'COMMENT ON LARGE OBJECT ...' => {
 		create_order => 65,
@@ -872,10 +1030,13 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			no_blobs    => 1,
-			schema_only => 1, }, },
+			schema_only => 1,
+		},
+	},
 
 	'COMMENT ON PUBLICATION pub1' => {
 		create_order => 55,
@@ -883,7 +1044,8 @@ my %tests = (
 					   IS \'comment on publication\';',
 		regexp =>
 		  qr/^COMMENT ON PUBLICATION pub1 IS 'comment on publication';/m,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'COMMENT ON SUBSCRIPTION sub1' => {
 		create_order => 55,
@@ -891,7 +1053,8 @@ my %tests = (
 					   IS \'comment on subscription\';',
 		regexp =>
 		  qr/^COMMENT ON SUBSCRIPTION sub1 IS 'comment on subscription';/m,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1' => {
 		create_order => 84,
@@ -902,7 +1065,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 IS 'comment on text search configuration';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1' => {
 		create_order => 84,
@@ -913,7 +1077,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 IS 'comment on text search dictionary';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1' => {
 		create_order => 84,
@@ -923,7 +1088,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1 IS 'comment on text search parser';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1' => {
 		create_order => 84,
@@ -933,7 +1099,8 @@ my %tests = (
 		  qr/^COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 IS 'comment on text search template';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.planets - ENUM' => {
 		create_order => 68,
@@ -943,7 +1110,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.planets IS 'comment on enum type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.textrange - RANGE' => {
 		create_order => 69,
@@ -953,7 +1121,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.textrange IS 'comment on range type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.int42 - Regular' => {
 		create_order => 70,
@@ -963,7 +1132,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.int42 IS 'comment on regular type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COMMENT ON TYPE dump_test.undefined - Undefined' => {
 		create_order => 71,
@@ -973,7 +1143,8 @@ my %tests = (
 		  qr/^COMMENT ON TYPE dump_test.undefined IS 'comment on undefined type';/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'COPY test_table' => {
 		create_order => 4,
@@ -988,13 +1159,16 @@ my %tests = (
 			%dump_test_schema_runs,
 			data_only            => 1,
 			only_dump_test_table => 1,
-			section_data         => 1, },
+			section_data         => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
 			exclude_test_table       => 1,
 			exclude_test_table_data  => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY fk_reference_test_table' => {
 		create_order => 22,
@@ -1010,11 +1184,14 @@ my %tests = (
 			data_only               => 1,
 			exclude_test_table      => 1,
 			exclude_test_table_data => 1,
-			section_data            => 1, },
+			section_data            => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	# In a data-only dump, we try to actually order according to FKs,
 	# so this check is just making sure that the referring table comes after
@@ -1026,7 +1203,8 @@ my %tests = (
 			\QCOPY dump_test.fk_reference_test_table (col1) FROM stdin;\E
 			\n(?:\d\n){5}\\\.\n
 			/xms,
-		like => { data_only => 1, }, },
+		like => { data_only => 1, },
+	},
 
 	'COPY test_second_table' => {
 		create_order => 7,
@@ -1041,11 +1219,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY test_third_table' => {
 		create_order => 12,
@@ -1060,19 +1241,23 @@ my %tests = (
 			%full_runs,
 			data_only    => 1,
 			role         => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade          => 1,
 			exclude_test_table_data => 1,
 			schema_only             => 1,
-			with_oids               => 1, }, },
+			with_oids               => 1,
+		},
+	},
 
 	'COPY test_third_table WITH OIDS' => {
 		regexp => qr/^
 			\QCOPY dump_test_second_schema.test_third_table (col1) WITH OIDS FROM stdin;\E
 			\n(?:\d+\t\d\n){9}\\\.\n
 			/xm,
-		like => { with_oids => 1, }, },
+		like => { with_oids => 1, },
+	},
 
 	'COPY test_fourth_table' => {
 		create_order => 7,
@@ -1086,11 +1271,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY test_fifth_table' => {
 		create_order => 54,
@@ -1104,11 +1292,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'COPY test_table_identity' => {
 		create_order => 54,
@@ -1122,44 +1313,53 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			data_only    => 1,
-			section_data => 1, },
+			section_data => 1,
+		},
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'INSERT INTO test_table' => {
 		regexp => qr/^
 			(?:INSERT\ INTO\ dump_test.test_table\ \(col1,\ col2,\ col3,\ col4\)\ VALUES\ \(\d,\ NULL,\ NULL,\ NULL\);\n){9}
 			/xm,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_second_table' => {
 		regexp => qr/^
 			(?:INSERT\ INTO\ dump_test.test_second_table\ \(col1,\ col2\)
 			   \ VALUES\ \(\d,\ '\d'\);\n){9}/xm,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_third_table' => {
 		regexp => qr/^
 			(?:INSERT\ INTO\ dump_test_second_schema.test_third_table\ \(col1\)
 			   \ VALUES\ \(\d\);\n){9}/xm,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_fourth_table' => {
 		regexp =>
 		  qr/^\QINSERT INTO dump_test.test_fourth_table DEFAULT VALUES;\E/m,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_fifth_table' => {
 		regexp =>
 		  qr/^\QINSERT INTO dump_test.test_fifth_table (col1, col2, col3, col4, col5) VALUES (NULL, true, false, B'11001', 'NaN');\E/m,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'INSERT INTO test_table_identity' => {
 		regexp =>
 		  qr/^\QINSERT INTO dump_test.test_table_identity (col1, col2) OVERRIDING SYSTEM VALUE VALUES (1, 'test');\E/m,
-		like => { column_inserts => 1, }, },
+		like => { column_inserts => 1, },
+	},
 
 	'CREATE ROLE regress_dump_test_role' => {
 		create_order => 1,
@@ -1168,7 +1368,9 @@ my %tests = (
 		like         => {
 			pg_dumpall_dbprivs       => 1,
 			pg_dumpall_globals       => 1,
-			pg_dumpall_globals_clean => 1, }, },
+			pg_dumpall_globals_clean => 1,
+		},
+	},
 
 	'CREATE ACCESS METHOD gist2' => {
 		create_order => 52,
@@ -1176,7 +1378,8 @@ my %tests = (
 		  'CREATE ACCESS METHOD gist2 TYPE INDEX HANDLER gisthandler;',
 		regexp =>
 		  qr/CREATE ACCESS METHOD gist2 TYPE INDEX HANDLER gisthandler;/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE COLLATION test0 FROM "C"' => {
 		create_order => 76,
@@ -1184,7 +1387,8 @@ my %tests = (
 		regexp       => qr/^
 		  \QCREATE COLLATION public.test0 (provider = libc, locale = 'C');\E/xm,
 		collation => 1,
-		like      => { %full_runs, section_pre_data => 1, }, },
+		like      => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE CAST FOR timestamptz' => {
 		create_order => 51,
@@ -1192,13 +1396,15 @@ my %tests = (
 		  'CREATE CAST (timestamptz AS interval) WITH FUNCTION age(timestamptz) AS ASSIGNMENT;',
 		regexp =>
 		  qr/CREATE CAST \(timestamp with time zone AS interval\) WITH FUNCTION pg_catalog\.age\(timestamp with time zone\) AS ASSIGNMENT;/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE DATABASE postgres' => {
 		regexp => qr/^
 			\QCREATE DATABASE postgres WITH TEMPLATE = template0 \E
 			.*;/xm,
-		like => { createdb => 1, }, },
+		like => { createdb => 1, },
+	},
 
 	'CREATE DATABASE dump_test' => {
 		create_order => 47,
@@ -1206,7 +1412,8 @@ my %tests = (
 		regexp       => qr/^
 			\QCREATE DATABASE dump_test WITH TEMPLATE = template0 \E
 			.*;/xm,
-		like => { pg_dumpall_dbprivs => 1, }, },
+		like => { pg_dumpall_dbprivs => 1, },
+	},
 
 	'CREATE EXTENSION ... plpgsql' => {
 		regexp => qr/^
@@ -1214,7 +1421,8 @@ my %tests = (
 			/xm,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'CREATE AGGREGATE dump_test.newavg' => {
 		create_order => 25,
@@ -1238,8 +1446,10 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			exclude_test_table => 1,
-			section_pre_data   => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+			section_pre_data   => 1,
+		},
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE CONVERSION dump_test.test_conversion' => {
 		create_order => 78,
@@ -1249,7 +1459,8 @@ my %tests = (
 		  qr/^\QCREATE DEFAULT CONVERSION dump_test.test_conversion FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE DOMAIN dump_test.us_postal_code' => {
 		create_order => 29,
@@ -1267,7 +1478,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.pltestlang_call_handler' => {
 		create_order => 17,
@@ -1283,7 +1495,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.trigger_func' => {
 		create_order => 30,
@@ -1298,7 +1511,8 @@ my %tests = (
 			\$\$;/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.event_trigger_func' => {
 		create_order => 32,
@@ -1313,7 +1527,8 @@ my %tests = (
 			\$\$;/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE OPERATOR FAMILY dump_test.op_family' => {
 		create_order => 73,
@@ -1324,7 +1539,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE OPERATOR CLASS dump_test.op_class' => {
 		create_order => 74,
@@ -1351,7 +1567,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE EVENT TRIGGER test_event_trigger' => {
 		create_order => 33,
@@ -1363,7 +1580,8 @@ my %tests = (
 			\QON ddl_command_start\E
 			\n\s+\QEXECUTE PROCEDURE dump_test.event_trigger_func();\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'CREATE TRIGGER test_trigger' => {
 		create_order => 31,
@@ -1380,10 +1598,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_test_table       => 1,
-			exclude_dump_test_schema => 1, }, },
+			exclude_dump_test_schema => 1,
+		},
+	},
 
 	'CREATE TYPE dump_test.planets AS ENUM' => {
 		create_order => 37,
@@ -1399,7 +1620,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			binary_upgrade           => 1,
-			exclude_dump_test_schema => 1, }, },
+			exclude_dump_test_schema => 1,
+		},
+	},
 
 	'CREATE TYPE dump_test.planets AS ENUM pg_upgrade' => {
 		regexp => qr/^
@@ -1411,7 +1634,8 @@ my %tests = (
 			\n.*^
 			\QALTER TYPE dump_test.planets ADD VALUE 'mars';\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE TYPE dump_test.textrange AS RANGE' => {
 		create_order => 38,
@@ -1424,7 +1648,8 @@ my %tests = (
 			\n\);/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.int42' => {
 		create_order => 39,
@@ -1432,7 +1657,8 @@ my %tests = (
 		regexp       => qr/^CREATE TYPE dump_test.int42;/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1' => {
 		create_order => 80,
@@ -1443,7 +1669,8 @@ my %tests = (
 			\s+\QPARSER = pg_catalog."default" );\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 ...' => {
 		regexp => qr/^
@@ -1507,7 +1734,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1' => {
 		create_order => 81,
@@ -1518,7 +1746,8 @@ my %tests = (
 			\s+\QLEXIZE = dsimple_lexize );\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1' => {
 		create_order => 82,
@@ -1533,7 +1762,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1' => {
 		create_order => 83,
@@ -1545,7 +1775,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.int42_in' => {
 		create_order => 40,
@@ -1559,7 +1790,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FUNCTION dump_test.int42_out' => {
 		create_order => 41,
@@ -1573,7 +1805,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE PROCEDURE dump_test.ptest1' => {
 		create_order => 41,
@@ -1586,7 +1819,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.int42 populated' => {
 		create_order => 42,
@@ -1609,7 +1843,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.composite' => {
 		create_order => 43,
@@ -1625,7 +1860,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TYPE dump_test.undefined' => {
 		create_order => 39,
@@ -1633,19 +1869,22 @@ my %tests = (
 		regexp       => qr/^CREATE TYPE dump_test.undefined;/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE FOREIGN DATA WRAPPER dummy' => {
 		create_order => 35,
 		create_sql   => 'CREATE FOREIGN DATA WRAPPER dummy;',
 		regexp       => qr/CREATE FOREIGN DATA WRAPPER dummy;/m,
-		like         => { %full_runs, section_pre_data => 1, }, },
+		like         => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE SERVER s1 FOREIGN DATA WRAPPER dummy' => {
 		create_order => 36,
 		create_sql   => 'CREATE SERVER s1 FOREIGN DATA WRAPPER dummy;',
 		regexp       => qr/CREATE SERVER s1 FOREIGN DATA WRAPPER dummy;/m,
-		like         => { %full_runs, section_pre_data => 1, }, },
+		like         => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE FOREIGN TABLE dump_test.foreign_table SERVER s1' => {
 		create_order => 88,
@@ -1663,7 +1902,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE USER MAPPING FOR regress_dump_test_role SERVER s1' => {
 		create_order => 86,
@@ -1671,7 +1911,8 @@ my %tests = (
 		  'CREATE USER MAPPING FOR regress_dump_test_role SERVER s1;',
 		regexp =>
 		  qr/CREATE USER MAPPING FOR regress_dump_test_role SERVER s1;/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE TRANSFORM FOR int' => {
 		create_order => 34,
@@ -1679,7 +1920,8 @@ my %tests = (
 		  'CREATE TRANSFORM FOR int LANGUAGE SQL (FROM SQL WITH FUNCTION varchar_transform(internal), TO SQL WITH FUNCTION int4recv(internal));',
 		regexp =>
 		  qr/CREATE TRANSFORM FOR integer LANGUAGE sql \(FROM SQL WITH FUNCTION pg_catalog\.varchar_transform\(internal\), TO SQL WITH FUNCTION pg_catalog\.int4recv\(internal\)\);/m,
-		like => { %full_runs, section_pre_data => 1, }, },
+		like => { %full_runs, section_pre_data => 1, },
+	},
 
 	'CREATE LANGUAGE pltestlang' => {
 		create_order => 18,
@@ -1690,7 +1932,8 @@ my %tests = (
 			\QHANDLER dump_test.pltestlang_call_handler;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview' => {
 		create_order => 20,
@@ -1704,7 +1947,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview_second' => {
 		create_order => 21,
@@ -1719,7 +1963,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview_third' => {
 		create_order => 58,
@@ -1734,7 +1979,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE MATERIALIZED VIEW matview_fourth' => {
 		create_order => 59,
@@ -1749,7 +1995,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE POLICY p1 ON test_table' => {
 		create_order => 22,
@@ -1764,10 +2011,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p2 ON test_table FOR SELECT' => {
 		create_order => 24,
@@ -1781,10 +2031,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p3 ON test_table FOR INSERT' => {
 		create_order => 25,
@@ -1798,10 +2051,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p4 ON test_table FOR UPDATE' => {
 		create_order => 26,
@@ -1815,10 +2071,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p5 ON test_table FOR DELETE' => {
 		create_order => 27,
@@ -1832,10 +2091,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE POLICY p6 ON test_table AS RESTRICTIVE' => {
 		create_order => 27,
@@ -1849,10 +2111,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_post_data    => 1, },
+			section_post_data    => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE PUBLICATION pub1' => {
 		create_order => 50,
@@ -1860,7 +2125,8 @@ my %tests = (
 		regexp       => qr/^
 			\QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'CREATE PUBLICATION pub2' => {
 		create_order => 50,
@@ -1870,7 +2136,8 @@ my %tests = (
 		regexp => qr/^
 			\QCREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish = '');\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'CREATE SUBSCRIPTION sub1' => {
 		create_order => 50,
@@ -1880,7 +2147,8 @@ my %tests = (
 		regexp => qr/^
 			\QCREATE SUBSCRIPTION sub1 CONNECTION 'dbname=doesnotexist' PUBLICATION pub1 WITH (connect = false, slot_name = 'sub1');\E
 			/xm,
-		like => { %full_runs, section_post_data => 1, }, },
+		like => { %full_runs, section_post_data => 1, },
+	},
 
 	'ALTER PUBLICATION pub1 ADD TABLE test_table' => {
 		create_order => 51,
@@ -1892,7 +2160,9 @@ my %tests = (
 		like   => { %full_runs, section_post_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'ALTER PUBLICATION pub1 ADD TABLE test_second_table' => {
 		create_order => 52,
@@ -1902,13 +2172,15 @@ my %tests = (
 			\QALTER PUBLICATION pub1 ADD TABLE ONLY dump_test.test_second_table;\E
 			/xm,
 		like => { %full_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SCHEMA public' => {
 		regexp => qr/^CREATE SCHEMA public;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'CREATE SCHEMA dump_test' => {
 		create_order => 2,
@@ -1916,7 +2188,8 @@ my %tests = (
 		regexp       => qr/^CREATE SCHEMA dump_test;/m,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SCHEMA dump_test_second_schema' => {
 		create_order => 9,
@@ -1925,7 +2198,9 @@ my %tests = (
 		like         => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'CREATE TABLE test_table' => {
 		create_order => 3,
@@ -1949,10 +2224,13 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
-			exclude_test_table       => 1, }, },
+			exclude_test_table       => 1,
+		},
+	},
 
 	'CREATE TABLE fk_reference_test_table' => {
 		create_order => 21,
@@ -1966,7 +2244,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TABLE test_second_table' => {
 		create_order => 6,
@@ -1982,7 +2261,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE UNLOGGED TABLE test_third_table WITH OIDS' => {
 		create_order => 11,
@@ -2003,11 +2283,14 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
+			section_pre_data => 1,
+		},
 		unlike => {
 
 			# FIXME figure out why/how binary upgrade drops OIDs.
-			binary_upgrade => 1, }, },
+			binary_upgrade => 1,
+		},
+	},
 
 	'CREATE TABLE measurement PARTITIONED BY' => {
 		create_order => 90,
@@ -2032,7 +2315,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			binary_upgrade           => 1,
-			exclude_dump_test_schema => 1, }, },
+			exclude_dump_test_schema => 1,
+		},
+	},
 
 	'CREATE TABLE measurement_y2006m2 PARTITION OF' => {
 		create_order => 91,
@@ -2049,8 +2334,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { binary_upgrade => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { binary_upgrade => 1, },
+	},
 
 	'CREATE TABLE test_fourth_table_zero_col' => {
 		create_order => 6,
@@ -2062,7 +2349,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TABLE test_fifth_table' => {
 		create_order => 53,
@@ -2084,7 +2372,8 @@ my %tests = (
 			/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE TABLE test_table_identity' => {
 		create_order => 3,
@@ -2109,7 +2398,8 @@ my %tests = (
 			/xms,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE STATISTICS extended_stats_no_options' => {
 		create_order => 97,
@@ -2120,7 +2410,8 @@ my %tests = (
 		    /xms,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE STATISTICS extended_stats_options' => {
 		create_order => 97,
@@ -2131,7 +2422,8 @@ my %tests = (
 		    /xms,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SEQUENCE test_table_col1_seq' => {
 		regexp => qr/^
@@ -2147,8 +2439,10 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+			section_pre_data     => 1,
+		},
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE SEQUENCE test_third_table_col1_seq' => {
 		regexp => qr/^
@@ -2163,7 +2457,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'CREATE UNIQUE INDEX test_third_table_idx ON test_third_table' => {
 		create_order => 13,
@@ -2176,7 +2472,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role              => 1,
-			section_post_data => 1, }, },
+			section_post_data => 1,
+		},
+	},
 
 	'CREATE INDEX ON ONLY measurement' => {
 		create_order => 92,
@@ -2201,14 +2499,17 @@ my %tests = (
 			schema_only             => 1,
 			section_post_data       => 1,
 			test_schema_plus_blobs  => 1,
-			with_oids               => 1, },
+			with_oids               => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
 			only_dump_test_table     => 1,
 			pg_dumpall_globals       => 1,
 			pg_dumpall_globals_clean => 1,
 			role                     => 1,
-			section_pre_data         => 1, }, },
+			section_pre_data         => 1,
+		},
+	},
 
 	'ALTER TABLE measurement PRIMARY KEY' => {
 		all_runs     => 1,
@@ -2222,7 +2523,8 @@ my %tests = (
 		/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'CREATE INDEX ... ON measurement_y2006_m2' => {
 		regexp => qr/^
@@ -2231,7 +2533,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role              => 1,
-			section_post_data => 1, }, },
+			section_post_data => 1,
+		},
+	},
 
 	'ALTER INDEX ... ATTACH PARTITION' => {
 		regexp => qr/^
@@ -2240,7 +2544,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			role              => 1,
-			section_post_data => 1, }, },
+			section_post_data => 1,
+		},
+	},
 
 	'ALTER INDEX ... ATTACH PARTITION (primary key)' => {
 		all_runs  => 1,
@@ -2264,14 +2570,17 @@ my %tests = (
 			role                     => 1,
 			schema_only              => 1,
 			section_post_data        => 1,
-			with_oids                => 1, },
+			with_oids                => 1,
+		},
 		unlike => {
 			only_dump_test_schema    => 1,
 			only_dump_test_table     => 1,
 			pg_dumpall_globals       => 1,
 			pg_dumpall_globals_clean => 1,
 			section_pre_data         => 1,
-			test_schema_plus_blobs   => 1, }, },
+			test_schema_plus_blobs   => 1,
+		},
+	},
 
 	'CREATE VIEW test_view' => {
 		create_order => 61,
@@ -2285,7 +2594,8 @@ my %tests = (
 			\n\s+\QWITH LOCAL CHECK OPTION;\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	'ALTER VIEW test_view SET DEFAULT' => {
 		create_order => 62,
@@ -2295,7 +2605,8 @@ my %tests = (
 			\QALTER TABLE ONLY dump_test.test_view ALTER COLUMN col1 SET DEFAULT 1;\E/xm,
 		like =>
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
-		unlike => { exclude_dump_test_schema => 1, }, },
+		unlike => { exclude_dump_test_schema => 1, },
+	},
 
 	# FIXME
 	'DROP SCHEMA public (for testing without public schema)' => {
@@ -2303,101 +2614,122 @@ my %tests = (
 		create_order => 100,
 		create_sql   => 'DROP SCHEMA public;',
 		regexp       => qr/^DROP SCHEMA public;/m,
-		like         => {}, },
+		like         => {},
+	},
 
 	'DROP SCHEMA public' => {
 		regexp => qr/^DROP SCHEMA public;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP SCHEMA IF EXISTS public' => {
 		regexp => qr/^DROP SCHEMA IF EXISTS public;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP EXTENSION plpgsql' => {
 		regexp => qr/^DROP EXTENSION plpgsql;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP FUNCTION dump_test.pltestlang_call_handler()' => {
 		regexp => qr/^DROP FUNCTION dump_test\.pltestlang_call_handler\(\);/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP LANGUAGE pltestlang' => {
 		regexp => qr/^DROP PROCEDURAL LANGUAGE pltestlang;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP SCHEMA dump_test' => {
 		regexp => qr/^DROP SCHEMA dump_test;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP SCHEMA dump_test_second_schema' => {
 		regexp => qr/^DROP SCHEMA dump_test_second_schema;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE test_table' => {
 		regexp => qr/^DROP TABLE dump_test\.test_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE fk_reference_test_table' => {
 		regexp => qr/^DROP TABLE dump_test\.fk_reference_test_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE test_second_table' => {
 		regexp => qr/^DROP TABLE dump_test\.test_second_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP TABLE test_third_table' => {
 		regexp => qr/^DROP TABLE dump_test_second_schema\.test_third_table;/m,
-		like   => { clean => 1, }, },
+		like   => { clean => 1, },
+	},
 
 	'DROP EXTENSION IF EXISTS plpgsql' => {
 		regexp => qr/^DROP EXTENSION IF EXISTS plpgsql;/m,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'DROP FUNCTION IF EXISTS dump_test.pltestlang_call_handler()' => {
 		regexp => qr/^
 			\QDROP FUNCTION IF EXISTS dump_test.pltestlang_call_handler();\E
 			/xm,
-		like => { clean_if_exists => 1, }, },
+		like => { clean_if_exists => 1, },
+	},
 
 	'DROP LANGUAGE IF EXISTS pltestlang' => {
 		regexp => qr/^DROP PROCEDURAL LANGUAGE IF EXISTS pltestlang;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP SCHEMA IF EXISTS dump_test' => {
 		regexp => qr/^DROP SCHEMA IF EXISTS dump_test;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP SCHEMA IF EXISTS dump_test_second_schema' => {
 		regexp => qr/^DROP SCHEMA IF EXISTS dump_test_second_schema;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP TABLE IF EXISTS test_table' => {
 		regexp => qr/^DROP TABLE IF EXISTS dump_test\.test_table;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP TABLE IF EXISTS test_second_table' => {
 		regexp => qr/^DROP TABLE IF EXISTS dump_test\.test_second_table;/m,
-		like   => { clean_if_exists => 1, }, },
+		like   => { clean_if_exists => 1, },
+	},
 
 	'DROP TABLE IF EXISTS test_third_table' => {
 		regexp => qr/^
 			\QDROP TABLE IF EXISTS dump_test_second_schema.test_third_table;\E
 			/xm,
-		like => { clean_if_exists => 1, }, },
+		like => { clean_if_exists => 1, },
+	},
 
 	'DROP ROLE regress_dump_test_role' => {
 		regexp => qr/^
 			\QDROP ROLE regress_dump_test_role;\E
 			/xm,
-		like => { pg_dumpall_globals_clean => 1, }, },
+		like => { pg_dumpall_globals_clean => 1, },
+	},
 
 	'DROP ROLE pg_' => {
 		regexp => qr/^
@@ -2405,7 +2737,8 @@ my %tests = (
 			/xm,
 
 		# this shouldn't ever get emitted anywhere
-		like => {}, },
+		like => {},
+	},
 
 	'GRANT USAGE ON SCHEMA dump_test_second_schema' => {
 		create_order => 10,
@@ -2417,8 +2750,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON FOREIGN DATA WRAPPER dummy' => {
 		create_order => 85,
@@ -2428,7 +2763,8 @@ my %tests = (
 			\QGRANT ALL ON FOREIGN DATA WRAPPER dummy TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON FOREIGN SERVER s1' => {
 		create_order => 85,
@@ -2438,7 +2774,8 @@ my %tests = (
 			\QGRANT ALL ON FOREIGN SERVER s1 TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON DOMAIN dump_test.us_postal_code' => {
 		create_order => 72,
@@ -2451,7 +2788,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT USAGE ON TYPE dump_test.int42' => {
 		create_order => 87,
@@ -2464,7 +2803,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT USAGE ON TYPE dump_test.planets - ENUM' => {
 		create_order => 66,
@@ -2477,7 +2818,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT USAGE ON TYPE dump_test.textrange - RANGE' => {
 		create_order => 67,
@@ -2490,7 +2833,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT CREATE ON DATABASE dump_test' => {
 		create_order => 48,
@@ -2499,7 +2844,8 @@ my %tests = (
 		regexp => qr/^
 			\QGRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;\E
 			/xm,
-		like => { pg_dumpall_dbprivs => 1, }, },
+		like => { pg_dumpall_dbprivs => 1, },
+	},
 
 	'GRANT SELECT ON TABLE test_table' => {
 		create_order => 5,
@@ -2511,11 +2857,14 @@ my %tests = (
 			%full_runs,
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
-			section_pre_data     => 1, },
+			section_pre_data     => 1,
+		},
 		unlike => {
 			exclude_dump_test_schema => 1,
 			exclude_test_table       => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT SELECT ON TABLE test_third_table' => {
 		create_order => 19,
@@ -2527,8 +2876,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT ALL ON SEQUENCE test_third_table_col1_seq' => {
 		create_order => 28,
@@ -2541,8 +2892,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT SELECT ON TABLE measurement' => {
 		create_order => 91,
@@ -2555,7 +2908,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT SELECT ON TABLE measurement_y2006m2' => {
 		create_order => 92,
@@ -2567,8 +2922,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			role             => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT ALL ON LARGE OBJECT ...' => {
 		create_order => 60,
@@ -2587,12 +2944,15 @@ my %tests = (
 			column_inserts         => 1,
 			data_only              => 1,
 			section_pre_data       => 1,
-			test_schema_plus_blobs => 1, },
+			test_schema_plus_blobs => 1,
+		},
 		unlike => {
 			binary_upgrade => 1,
 			no_blobs       => 1,
 			no_privs       => 1,
-			schema_only    => 1, }, },
+			schema_only    => 1,
+		},
+	},
 
 	'GRANT INSERT(col1) ON TABLE test_second_table' => {
 		create_order => 8,
@@ -2606,7 +2966,9 @@ my %tests = (
 		  { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
 		unlike => {
 			exclude_dump_test_schema => 1,
-			no_privs                 => 1, }, },
+			no_privs                 => 1,
+		},
+	},
 
 	'GRANT EXECUTE ON FUNCTION pg_sleep() TO regress_dump_test_role' => {
 		create_order => 16,
@@ -2616,7 +2978,8 @@ my %tests = (
 			\QGRANT ALL ON FUNCTION pg_catalog.pg_sleep(double precision) TO regress_dump_test_role;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT SELECT (proname ...) ON TABLE pg_proc TO public' => {
 		create_order => 46,
@@ -2684,7 +3047,8 @@ my %tests = (
 		\QGRANT SELECT(proconfig) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.*
 		\QGRANT SELECT(proacl) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E/xms,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT USAGE ON SCHEMA public TO public' => {
 		regexp => qr/^
@@ -2693,7 +3057,8 @@ my %tests = (
 			/xm,
 
 		# this shouldn't ever get emitted anymore
-		like => {}, },
+		like => {},
+	},
 
 	'REFRESH MATERIALIZED VIEW matview' => {
 		regexp => qr/^REFRESH MATERIALIZED VIEW dump_test.matview;/m,
@@ -2702,7 +3067,9 @@ my %tests = (
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	'REFRESH MATERIALIZED VIEW matview_second' => {
 		regexp => qr/^
@@ -2715,21 +3082,25 @@ my %tests = (
 		unlike => {
 			binary_upgrade           => 1,
 			exclude_dump_test_schema => 1,
-			schema_only              => 1, }, },
+			schema_only              => 1,
+		},
+	},
 
 	# FIXME
 	'REFRESH MATERIALIZED VIEW matview_third' => {
 		regexp => qr/^
 			\QREFRESH MATERIALIZED VIEW dump_test.matview_third;\E
 			/xms,
-		like => {}, },
+		like => {},
+	},
 
 	# FIXME
 	'REFRESH MATERIALIZED VIEW matview_fourth' => {
 		regexp => qr/^
 			\QREFRESH MATERIALIZED VIEW dump_test.matview_fourth;\E
 			/xms,
-		like => {}, },
+		like => {},
+	},
 
 	'REVOKE CONNECT ON DATABASE dump_test FROM public' => {
 		create_order => 49,
@@ -2739,7 +3110,8 @@ my %tests = (
 			\QGRANT TEMPORARY ON DATABASE dump_test TO PUBLIC;\E\n
 			\QGRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;\E
 			/xm,
-		like => { pg_dumpall_dbprivs => 1, }, },
+		like => { pg_dumpall_dbprivs => 1, },
+	},
 
 	'REVOKE EXECUTE ON FUNCTION pg_sleep() FROM public' => {
 		create_order => 15,
@@ -2749,7 +3121,8 @@ my %tests = (
 			\QREVOKE ALL ON FUNCTION pg_catalog.pg_sleep(double precision) FROM PUBLIC;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'REVOKE SELECT ON TABLE pg_proc FROM public' => {
 		create_order => 45,
@@ -2757,7 +3130,8 @@ my %tests = (
 		regexp =>
 		  qr/^REVOKE SELECT ON TABLE pg_catalog.pg_proc FROM PUBLIC;/m,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'REVOKE CREATE ON SCHEMA public FROM public' => {
 		create_order => 16,
@@ -2767,7 +3141,8 @@ my %tests = (
 			\n\QGRANT USAGE ON SCHEMA public TO PUBLIC;\E
 			/xm,
 		like => { %full_runs, section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+		unlike => { no_privs => 1, },
+	},
 
 	'REVOKE USAGE ON LANGUAGE plpgsql FROM public' => {
 		create_order => 16,
@@ -2778,8 +3153,10 @@ my %tests = (
 			%dump_test_schema_runs,
 			only_dump_test_table => 1,
 			role                 => 1,
-			section_pre_data     => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data     => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 );
 
diff --git a/src/bin/pg_dump/t/010_dump_connstr.pl b/src/bin/pg_dump/t/010_dump_connstr.pl
index bf9bd52..c40b30f 100644
--- a/src/bin/pg_dump/t/010_dump_connstr.pl
+++ b/src/bin/pg_dump/t/010_dump_connstr.pl
@@ -34,9 +34,11 @@ $node->init(extra => [ '--locale=C', '--encoding=LATIN1' ]);
 
 # prep pg_hba.conf and pg_ident.conf
 $node->run_log(
-	[   $ENV{PG_REGRESS}, '--config-auth',
+	[
+		$ENV{PG_REGRESS}, '--config-auth',
 		$node->data_dir,  '--create-role',
-		"$dbname1,$dbname2,$dbname3,$dbname4" ]);
+		"$dbname1,$dbname2,$dbname3,$dbname4"
+	]);
 $node->start;
 
 my $backupdir = $node->backup_dir;
@@ -54,24 +56,32 @@ foreach my $dbname ($dbname1, $dbname2, $dbname3, $dbname4, 'CamelCase')
 # For these tests, pg_dumpall -r is used because it produces a short
 # dump.
 $node->command_ok(
-	[   'pg_dumpall', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname1),
-		'-U', $dbname4 ],
+		'-U', $dbname4
+	],
 	'pg_dumpall with long ASCII name 1');
 $node->command_ok(
-	[   'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname2),
-		'-U', $dbname3 ],
+		'-U', $dbname3
+	],
 	'pg_dumpall with long ASCII name 2');
 $node->command_ok(
-	[   'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname3),
-		'-U', $dbname2 ],
+		'-U', $dbname2
+	],
 	'pg_dumpall with long ASCII name 3');
 $node->command_ok(
-	[   'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
+	[
+		'pg_dumpall', '--no-sync', '-r', '-f', $discard, '--dbname',
 		$node->connstr($dbname4),
-		'-U', $dbname1 ],
+		'-U', $dbname1
+	],
 	'pg_dumpall with long ASCII name 4');
 $node->command_ok(
 	[ 'pg_dumpall', '--no-sync', '-r', '-l', 'dbname=template1' ],
@@ -91,8 +101,10 @@ $node->safe_psql($dbname1, 'CREATE TABLE t0()');
 
 # XXX no printed message when this fails, just SIGPIPE termination
 $node->command_ok(
-	[   'pg_dump', '-Fd', '--no-sync', '-j2', '-f', $dirfmt, '-U', $dbname1,
-		$node->connstr($dbname1) ],
+	[
+		'pg_dump', '-Fd', '--no-sync', '-j2', '-f', $dirfmt, '-U', $dbname1,
+		$node->connstr($dbname1)
+	],
 	'parallel dump');
 
 # recreate $dbname1 for restore test
@@ -106,9 +118,11 @@ $node->command_ok(
 $node->run_log([ 'dropdb', $dbname1 ]);
 
 $node->command_ok(
-	[   'pg_restore', '-C',  '-v', '-d',
+	[
+		'pg_restore', '-C',  '-v', '-d',
 		'template1',  '-j2', '-U', $dbname1,
-		$dirfmt ],
+		$dirfmt
+	],
 	'parallel restore with create');
 
 
@@ -127,9 +141,11 @@ my $envar_node = get_new_node('destination_envar');
 $envar_node->init(
 	extra => [ '-U', $bootstrap_super, '--locale=C', '--encoding=LATIN1' ]);
 $envar_node->run_log(
-	[   $ENV{PG_REGRESS},      '--config-auth',
+	[
+		$ENV{PG_REGRESS},      '--config-auth',
 		$envar_node->data_dir, '--create-role',
-		"$bootstrap_super,$restore_super" ]);
+		"$bootstrap_super,$restore_super"
+	]);
 $envar_node->start;
 
 # make superuser for restore
@@ -157,16 +173,20 @@ my $cmdline_node = get_new_node('destination_cmdline');
 $cmdline_node->init(
 	extra => [ '-U', $bootstrap_super, '--locale=C', '--encoding=LATIN1' ]);
 $cmdline_node->run_log(
-	[   $ENV{PG_REGRESS},        '--config-auth',
+	[
+		$ENV{PG_REGRESS},        '--config-auth',
 		$cmdline_node->data_dir, '--create-role',
-		"$bootstrap_super,$restore_super" ]);
+		"$bootstrap_super,$restore_super"
+	]);
 $cmdline_node->start;
 $cmdline_node->run_log(
 	[ 'createuser', '-U', $bootstrap_super, '-s', $restore_super ]);
 {
 	$result = run_log(
-		[   'psql',         '-p', $cmdline_node->port, '-U',
-			$restore_super, '-X', '-f',                $plain ],
+		[
+			'psql',         '-p', $cmdline_node->port, '-U',
+			$restore_super, '-X', '-f',                $plain
+		],
 		'2>',
 		\$stderr);
 }
diff --git a/src/bin/pg_resetwal/t/002_corrupted.pl b/src/bin/pg_resetwal/t/002_corrupted.pl
index ab840d1..0022dcb 100644
--- a/src/bin/pg_resetwal/t/002_corrupted.pl
+++ b/src/bin/pg_resetwal/t/002_corrupted.pl
@@ -31,7 +31,8 @@ command_checks_all(
 	[ 'pg_resetwal', '-n', $node->data_dir ],
 	0,
 	[qr/pg_control version number/],
-	[   qr/pg_resetwal: pg_control exists but is broken or wrong version; ignoring it/
+	[
+		qr/pg_resetwal: pg_control exists but is broken or wrong version; ignoring it/
 	],
 	'processes corrupted pg_control all zeroes');
 
@@ -46,6 +47,7 @@ command_checks_all(
 	[ 'pg_resetwal', '-n', $node->data_dir ],
 	0,
 	[qr/pg_control version number/],
-	[   qr/\Qpg_resetwal: pg_control specifies invalid WAL segment size (0 bytes); proceed with caution\E/
+	[
+		qr/\Qpg_resetwal: pg_control specifies invalid WAL segment size (0 bytes); proceed with caution\E/
 	],
 	'processes zero WAL segment size');
diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index 278ffd8..52531bb 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -92,7 +92,8 @@ sub check_query
 	my $result = run [
 		'psql', '-q', '-A', '-t', '--no-psqlrc', '-d',
 		$node_master->connstr('postgres'),
-		'-c', $query ],
+		'-c', $query
+	  ],
 	  '>', \$stdout, '2>', \$stderr;
 
 	# We don't use ok() for the exit code and stderr, because we want this
@@ -214,10 +215,12 @@ sub run_pg_rewind
 		# Stop the master and be ready to perform the rewind
 		$node_standby->stop;
 		command_ok(
-			[   'pg_rewind',
+			[
+				'pg_rewind',
 				"--debug",
 				"--source-pgdata=$standby_pgdata",
-				"--target-pgdata=$master_pgdata" ],
+				"--target-pgdata=$master_pgdata"
+			],
 			'pg_rewind local');
 	}
 	elsif ($test_mode eq "remote")
@@ -225,9 +228,11 @@ sub run_pg_rewind
 
 		# Do rewind using a remote connection as source
 		command_ok(
-			[   'pg_rewind',       "--debug",
+			[
+				'pg_rewind',       "--debug",
 				"--source-server", $standby_connstr,
-				"--target-pgdata=$master_pgdata" ],
+				"--target-pgdata=$master_pgdata"
+			],
 			'pg_rewind remote');
 	}
 	else
diff --git a/src/bin/pg_rewind/t/003_extrafiles.pl b/src/bin/pg_rewind/t/003_extrafiles.pl
index 8f4f972..8b469cd 100644
--- a/src/bin/pg_rewind/t/003_extrafiles.pl
+++ b/src/bin/pg_rewind/t/003_extrafiles.pl
@@ -66,7 +66,8 @@ sub run_test
 	@paths = sort @paths;
 	is_deeply(
 		\@paths,
-		[   "$test_master_datadir/tst_both_dir",
+		[
+			"$test_master_datadir/tst_both_dir",
 			"$test_master_datadir/tst_both_dir/both_file1",
 			"$test_master_datadir/tst_both_dir/both_file2",
 			"$test_master_datadir/tst_both_dir/both_subdir",
diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl
index 947f13d..00fb04f 100644
--- a/src/bin/pgbench/t/001_pgbench_with_server.pl
+++ b/src/bin/pgbench/t/001_pgbench_with_server.pl
@@ -59,8 +59,10 @@ pgbench(
 	[qr{processed: 125/125}],
 	[qr{^$}],
 	'concurrency OID generation',
-	{   '001_pgbench_concurrent_oid_generation' =>
-		  'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);' });
+	{
+		'001_pgbench_concurrent_oid_generation' =>
+		  'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);'
+	});
 
 # cleanup
 $node->safe_psql('postgres', 'DROP TABLE oid_tbl;');
@@ -70,8 +72,10 @@ pgbench(
 	'no-such-database',
 	1,
 	[qr{^$}],
-	[   qr{connection to database "no-such-database" failed},
-		qr{FATAL:  database "no-such-database" does not exist} ],
+	[
+		qr{connection to database "no-such-database" failed},
+		qr{FATAL:  database "no-such-database" does not exist}
+	],
 	'no such database');
 
 pgbench(
@@ -83,8 +87,10 @@ pgbench(
 pgbench(
 	'-i', 0,
 	[qr{^$}],
-	[   qr{creating tables},       qr{vacuuming},
-		qr{creating primary keys}, qr{done\.} ],
+	[
+		qr{creating tables},       qr{vacuuming},
+		qr{creating primary keys}, qr{done\.}
+	],
 	'pgbench scale 1 initialization',);
 
 # Again, with all possible options
@@ -92,12 +98,14 @@ pgbench(
 	'--initialize --init-steps=dtpvg --scale=1 --unlogged-tables --fillfactor=98 --foreign-keys --quiet --tablespace=pg_default --index-tablespace=pg_default',
 	0,
 	[qr{^$}i],
-	[   qr{dropping old tables},
+	[
+		qr{dropping old tables},
 		qr{creating tables},
 		qr{vacuuming},
 		qr{creating primary keys},
 		qr{creating foreign keys},
-		qr{done\.} ],
+		qr{done\.}
+	],
 	'pgbench scale 1 initialization');
 
 # Test interaction of --init-steps with legacy step-selection options
@@ -105,12 +113,14 @@ pgbench(
 	'--initialize --init-steps=dtpvgvv --no-vacuum --foreign-keys --unlogged-tables',
 	0,
 	[qr{^$}],
-	[   qr{dropping old tables},
+	[
+		qr{dropping old tables},
 		qr{creating tables},
 		qr{creating primary keys},
 		qr{.* of .* tuples \(.*\) done},
 		qr{creating foreign keys},
-		qr{done\.} ],
+		qr{done\.}
+	],
 	'pgbench --init-steps');
 
 # Run all builtin scripts, for a few transactions each
@@ -118,34 +128,42 @@ pgbench(
 	'--transactions=5 -Dfoo=bla --client=2 --protocol=simple --builtin=t'
 	  . ' --connect -n -v -n',
 	0,
-	[   qr{builtin: TPC-B},
+	[
+		qr{builtin: TPC-B},
 		qr{clients: 2\b},
 		qr{processed: 10/10},
-		qr{mode: simple} ],
+		qr{mode: simple}
+	],
 	[qr{^$}],
 	'pgbench tpcb-like');
 
 pgbench(
 	'--transactions=20 --client=5 -M extended --builtin=si -C --no-vacuum -s 1',
 	0,
-	[   qr{builtin: simple update},
+	[
+		qr{builtin: simple update},
 		qr{clients: 5\b},
 		qr{threads: 1\b},
 		qr{processed: 100/100},
-		qr{mode: extended} ],
+		qr{mode: extended}
+	],
 	[qr{scale option ignored}],
 	'pgbench simple update');
 
 pgbench(
 	'-t 100 -c 7 -M prepared -b se --debug',
 	0,
-	[   qr{builtin: select only},
+	[
+		qr{builtin: select only},
 		qr{clients: 7\b},
 		qr{threads: 1\b},
 		qr{processed: 700/700},
-		qr{mode: prepared} ],
-	[   qr{vacuum},    qr{client 0}, qr{client 1}, qr{sending},
-		qr{receiving}, qr{executing} ],
+		qr{mode: prepared}
+	],
+	[
+		qr{vacuum},    qr{client 0}, qr{client 1}, qr{sending},
+		qr{receiving}, qr{executing}
+	],
 	'pgbench select only');
 
 # check if threads are supported
@@ -161,16 +179,19 @@ my $nthreads = 2;
 pgbench(
 	"-t 100 -c 1 -j $nthreads -M prepared -n",
 	0,
-	[   qr{type: multiple scripts},
+	[
+		qr{type: multiple scripts},
 		qr{mode: prepared},
 		qr{script 1: .*/001_pgbench_custom_script_1},
 		qr{weight: 2},
 		qr{script 2: .*/001_pgbench_custom_script_2},
 		qr{weight: 1},
-		qr{processed: 100/100} ],
+		qr{processed: 100/100}
+	],
 	[qr{^$}],
 	'pgbench custom scripts',
-	{   '001_pgbench_custom_script_1@1' => q{-- select only
+	{
+		'001_pgbench_custom_script_1@1' => q{-- select only
 \set aid random(1, :scale * 100000)
 SELECT abalance::INTEGER AS balance
   FROM pgbench_accounts
@@ -182,41 +203,50 @@ BEGIN;
 -- cast are needed for typing under -M prepared
 SELECT :foo::INT + :scale::INT * :client_id::INT AS bla;
 COMMIT;
-} });
+}
+	});
 
 pgbench(
 	'-n -t 10 -c 1 -M simple',
 	0,
-	[   qr{type: .*/001_pgbench_custom_script_3},
+	[
+		qr{type: .*/001_pgbench_custom_script_3},
 		qr{processed: 10/10},
-		qr{mode: simple} ],
+		qr{mode: simple}
+	],
 	[qr{^$}],
 	'pgbench custom script',
-	{   '001_pgbench_custom_script_3' => q{-- select only variant
+	{
+		'001_pgbench_custom_script_3' => q{-- select only variant
 \set aid random(1, :scale * 100000)
 BEGIN;
 SELECT abalance::INTEGER AS balance
   FROM pgbench_accounts
   WHERE aid=:aid;
 COMMIT;
-} });
+}
+	});
 
 pgbench(
 	'-n -t 10 -c 2 -M extended',
 	0,
-	[   qr{type: .*/001_pgbench_custom_script_4},
+	[
+		qr{type: .*/001_pgbench_custom_script_4},
 		qr{processed: 20/20},
-		qr{mode: extended} ],
+		qr{mode: extended}
+	],
 	[qr{^$}],
 	'pgbench custom script',
-	{   '001_pgbench_custom_script_4' => q{-- select only variant
+	{
+		'001_pgbench_custom_script_4' => q{-- select only variant
 \set aid random(1, :scale * 100000)
 BEGIN;
 SELECT abalance::INTEGER AS balance
   FROM pgbench_accounts
   WHERE aid=:aid;
 COMMIT;
-} });
+}
+	});
 
 # test expressions
 # command 1..3 and 23 depend on random seed which is used to call srandom.
@@ -224,7 +254,8 @@ pgbench(
 	'--random-seed=5432 -t 1 -Dfoo=-10.1 -Dbla=false -Di=+3 -Dminint=-9223372036854775808 -Dn=null -Dt=t -Df=of -Dd=1.0',
 	0,
 	[ qr{type: .*/001_pgbench_expressions}, qr{processed: 1/1} ],
-	[   qr{setting random seed to 5432\b},
+	[
+		qr{setting random seed to 5432\b},
 
 		# After explicit seeding, the four * random checks (1-3,20) should be
 		# deterministic, but not necessarily portable.
@@ -289,7 +320,8 @@ pgbench(
 		qr{command=98.: int 5432\b},    # :random_seed
 	],
 	'pgbench expressions',
-	{   '001_pgbench_expressions' => q{-- integer functions
+	{
+		'001_pgbench_expressions' => q{-- integer functions
 \set i1 debug(random(10, 19))
 \set i2 debug(random_exponential(100, 199, 10.0))
 \set i3 debug(random_gaussian(1000, 1999, 10.0))
@@ -411,7 +443,8 @@ SELECT :v0, :v1, :v2, :v3;
 \set sc debug(:scale)
 \set ci debug(:client_id)
 \set rs debug(:random_seed)
-} });
+}
+	});
 
 # random determinism when seeded
 $node->safe_psql('postgres',
@@ -428,7 +461,8 @@ for my $i (1, 2)
 		[qr{processed: 1/1}],
 		[qr{setting random seed to $seed\b}],
 		"random seeded with $seed",
-		{   "001_pgbench_random_seed_$i" => q{-- test random functions
+		{
+			"001_pgbench_random_seed_$i" => q{-- test random functions
 \set ur random(1000, 1999)
 \set er random_exponential(2000, 2999, 2.0)
 \set gr random_gaussian(3000, 3999, 3.0)
@@ -438,7 +472,8 @@ INSERT INTO seeded_random(seed, rand, val) VALUES
   (:random_seed, 'exponential', :er),
   (:random_seed, 'gaussian', :gr),
   (:random_seed, 'zipfian', :zr);
-} });
+}
+		});
 }
 
 # check that all runs generated the same 4 values
@@ -462,12 +497,15 @@ $node->safe_psql('postgres', 'DROP TABLE seeded_random;');
 # backslash commands
 pgbench(
 	'-t 1', 0,
-	[   qr{type: .*/001_pgbench_backslash_commands},
+	[
+		qr{type: .*/001_pgbench_backslash_commands},
 		qr{processed: 1/1},
-		qr{shell-echo-output} ],
+		qr{shell-echo-output}
+	],
 	[qr{command=8.: int 2\b}],
 	'pgbench backslash commands',
-	{   '001_pgbench_backslash_commands' => q{-- run set
+	{
+		'001_pgbench_backslash_commands' => q{-- run set
 \set zero 0
 \set one 1.0
 -- sleep
@@ -482,36 +520,48 @@ pgbench(
 \set n debug(:two)
 -- shell
 \shell echo shell-echo-output
-} });
+}
+	});
 
 # trigger many expression errors
 my @errors = (
 
 	# [ test name, script number, status, stderr match ]
 	# SQL
-	[   'sql syntax error',
+	[
+		'sql syntax error',
 		0,
-		[   qr{ERROR:  syntax error},
-			qr{prepared statement .* does not exist} ],
+		[
+			qr{ERROR:  syntax error},
+			qr{prepared statement .* does not exist}
+		],
 		q{-- SQL syntax error
     SELECT 1 + ;
-} ],
-	[   'sql too many args', 1, [qr{statement has too many arguments.*\b9\b}],
+}
+	],
+	[
+		'sql too many args', 1, [qr{statement has too many arguments.*\b9\b}],
 		q{-- MAX_ARGS=10 for prepared
 \set i 0
 SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i);
-} ],
+}
+	],
 
 	# SHELL
-	[   'shell bad command',                    0,
-		[qr{\(shell\) .* meta-command failed}], q{\shell no-such-command} ],
-	[   'shell undefined variable', 0,
+	[
+		'shell bad command',                    0,
+		[qr{\(shell\) .* meta-command failed}], q{\shell no-such-command}
+	],
+	[
+		'shell undefined variable', 0,
 		[qr{undefined variable ":nosuchvariable"}],
 		q{-- undefined variable in shell
 \shell echo ::foo :nosuchvariable
-} ],
+}
+	],
 	[ 'shell missing command', 1, [qr{missing command }], q{\shell} ],
-	[   'shell too many args', 1, [qr{too many arguments in command "shell"}],
+	[
+		'shell too many args', 1, [qr{too many arguments in command "shell"}],
 		q{-- 257 arguments to \shell
 \shell echo \
  0 1 2 3 4 5 6 7 8 9 A B C D E F \
@@ -530,95 +580,154 @@ SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i);
  0 1 2 3 4 5 6 7 8 9 A B C D E F \
  0 1 2 3 4 5 6 7 8 9 A B C D E F \
  0 1 2 3 4 5 6 7 8 9 A B C D E F
-} ],
+}
+	],
 
 	# SET
-	[   'set syntax error',                  1,
-		[qr{syntax error in command "set"}], q{\set i 1 +} ],
-	[   'set no such function',         1,
-		[qr{unexpected function name}], q{\set i noSuchFunction()} ],
-	[   'set invalid variable name', 0,
-		[qr{invalid variable name}], q{\set . 1} ],
-	[   'set int overflow',                   0,
-		[qr{double to int overflow for 100}], q{\set i int(1E32)} ],
+	[
+		'set syntax error',                  1,
+		[qr{syntax error in command "set"}], q{\set i 1 +}
+	],
+	[
+		'set no such function',         1,
+		[qr{unexpected function name}], q{\set i noSuchFunction()}
+	],
+	[
+		'set invalid variable name', 0,
+		[qr{invalid variable name}], q{\set . 1}
+	],
+	[
+		'set int overflow',                   0,
+		[qr{double to int overflow for 100}], q{\set i int(1E32)}
+	],
 	[ 'set division by zero', 0, [qr{division by zero}], q{\set i 1/0} ],
-	[   'set bigint out of range', 0,
-		[qr{bigint out of range}], q{\set i 9223372036854775808 / -1} ],
-	[   'set undefined variable',
+	[
+		'set bigint out of range', 0,
+		[qr{bigint out of range}], q{\set i 9223372036854775808 / -1}
+	],
+	[
+		'set undefined variable',
 		0,
 		[qr{undefined variable "nosuchvariable"}],
-		q{\set i :nosuchvariable} ],
+		q{\set i :nosuchvariable}
+	],
 	[ 'set unexpected char', 1, [qr{unexpected character .;.}], q{\set i ;} ],
-	[   'set too many args',
+	[
+		'set too many args',
 		0,
 		[qr{too many function arguments}],
-		q{\set i least(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)} ],
-	[   'set empty random range',          0,
-		[qr{empty range given to random}], q{\set i random(5,3)} ],
-	[   'set random range too large',
+		q{\set i least(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)}
+	],
+	[
+		'set empty random range',          0,
+		[qr{empty range given to random}], q{\set i random(5,3)}
+	],
+	[
+		'set random range too large',
 		0,
 		[qr{random range is too large}],
-		q{\set i random(-9223372036854775808, 9223372036854775807)} ],
-	[   'set gaussian param too small',
+		q{\set i random(-9223372036854775808, 9223372036854775807)}
+	],
+	[
+		'set gaussian param too small',
 		0,
 		[qr{gaussian param.* at least 2}],
-		q{\set i random_gaussian(0, 10, 1.0)} ],
-	[   'set exponential param greater 0',
+		q{\set i random_gaussian(0, 10, 1.0)}
+	],
+	[
+		'set exponential param greater 0',
 		0,
 		[qr{exponential parameter must be greater }],
-		q{\set i random_exponential(0, 10, 0.0)} ],
-	[   'set zipfian param to 1',
+		q{\set i random_exponential(0, 10, 0.0)}
+	],
+	[
+		'set zipfian param to 1',
 		0,
 		[qr{zipfian parameter must be in range \(0, 1\) U \(1, \d+\]}],
-		q{\set i random_zipfian(0, 10, 1)} ],
-	[   'set zipfian param too large',
+		q{\set i random_zipfian(0, 10, 1)}
+	],
+	[
+		'set zipfian param too large',
 		0,
 		[qr{zipfian parameter must be in range \(0, 1\) U \(1, \d+\]}],
-		q{\set i random_zipfian(0, 10, 1000000)} ],
-	[   'set non numeric value',                     0,
-		[qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1} ],
+		q{\set i random_zipfian(0, 10, 1000000)}
+	],
+	[
+		'set non numeric value',                     0,
+		[qr{malformed variable "foo" value: "bla"}], q{\set i :foo + 1}
+	],
 	[ 'set no expression',    1, [qr{syntax error}],      q{\set i} ],
 	[ 'set missing argument', 1, [qr{missing argument}i], q{\set} ],
-	[   'set not a bool',                      0,
-		[qr{cannot coerce double to boolean}], q{\set b NOT 0.0} ],
-	[   'set not an int',                   0,
-		[qr{cannot coerce boolean to int}], q{\set i TRUE + 2} ],
-	[   'set not a double',                    0,
-		[qr{cannot coerce boolean to double}], q{\set d ln(TRUE)} ],
-	[   'set case error',
+	[
+		'set not a bool',                      0,
+		[qr{cannot coerce double to boolean}], q{\set b NOT 0.0}
+	],
+	[
+		'set not an int',                   0,
+		[qr{cannot coerce boolean to int}], q{\set i TRUE + 2}
+	],
+	[
+		'set not a double',                    0,
+		[qr{cannot coerce boolean to double}], q{\set d ln(TRUE)}
+	],
+	[
+		'set case error',
 		1,
 		[qr{syntax error in command "set"}],
-		q{\set i CASE TRUE THEN 1 ELSE 0 END} ],
-	[   'set random error',                 0,
-		[qr{cannot coerce boolean to int}], q{\set b random(FALSE, TRUE)} ],
-	[   'set number of args mismatch',        1,
-		[qr{unexpected number of arguments}], q{\set d ln(1.0, 2.0))} ],
-	[   'set at least one arg',               1,
-		[qr{at least one argument expected}], q{\set i greatest())} ],
+		q{\set i CASE TRUE THEN 1 ELSE 0 END}
+	],
+	[
+		'set random error',                 0,
+		[qr{cannot coerce boolean to int}], q{\set b random(FALSE, TRUE)}
+	],
+	[
+		'set number of args mismatch',        1,
+		[qr{unexpected number of arguments}], q{\set d ln(1.0, 2.0))}
+	],
+	[
+		'set at least one arg',               1,
+		[qr{at least one argument expected}], q{\set i greatest())}
+	],
 
 	# SETSHELL
-	[   'setshell not an int',                0,
-		[qr{command must return an integer}], q{\setshell i echo -n one} ],
+	[
+		'setshell not an int',                0,
+		[qr{command must return an integer}], q{\setshell i echo -n one}
+	],
 	[ 'setshell missing arg', 1, [qr{missing argument }], q{\setshell var} ],
-	[   'setshell no such command',   0,
-		[qr{could not read result }], q{\setshell var no-such-command} ],
+	[
+		'setshell no such command',   0,
+		[qr{could not read result }], q{\setshell var no-such-command}
+	],
 
 	# SLEEP
-	[   'sleep undefined variable',      0,
-		[qr{sleep: undefined variable}], q{\sleep :nosuchvariable} ],
-	[   'sleep too many args',    1,
-		[qr{too many arguments}], q{\sleep too many args} ],
-	[   'sleep missing arg', 1,
-		[ qr{missing argument}, qr{\\sleep} ], q{\sleep} ],
-	[   'sleep unknown unit',         1,
-		[qr{unrecognized time unit}], q{\sleep 1 week} ],
+	[
+		'sleep undefined variable',      0,
+		[qr{sleep: undefined variable}], q{\sleep :nosuchvariable}
+	],
+	[
+		'sleep too many args',    1,
+		[qr{too many arguments}], q{\sleep too many args}
+	],
+	[
+		'sleep missing arg', 1,
+		[ qr{missing argument}, qr{\\sleep} ], q{\sleep}
+	],
+	[
+		'sleep unknown unit',         1,
+		[qr{unrecognized time unit}], q{\sleep 1 week}
+	],
 
 	# MISC
-	[   'misc invalid backslash command',         1,
-		[qr{invalid command .* "nosuchcommand"}], q{\nosuchcommand} ],
+	[
+		'misc invalid backslash command',         1,
+		[qr{invalid command .* "nosuchcommand"}], q{\nosuchcommand}
+	],
 	[ 'misc empty script', 1, [qr{empty command list for script}], q{} ],
-	[   'bad boolean',                     0,
-		[qr{malformed variable.*trueXXX}], q{\set b :badtrue or true} ],);
+	[
+		'bad boolean',                     0,
+		[qr{malformed variable.*trueXXX}], q{\set b :badtrue or true}
+	],);
 
 
 for my $e (@errors)
@@ -641,7 +750,8 @@ pgbench(
 	[ qr{processed: 1/1}, qr{zipfian cache array overflowed 1 time\(s\)} ],
 	[qr{^}],
 	'pgbench zipfian array overflow on random_zipfian',
-	{   '001_pgbench_random_zipfian' => q{
+	{
+		'001_pgbench_random_zipfian' => q{
 \set i random_zipfian(1, 100, 0.5)
 \set i random_zipfian(2, 100, 0.5)
 \set i random_zipfian(3, 100, 0.5)
@@ -658,7 +768,8 @@ pgbench(
 \set i random_zipfian(14, 100, 0.5)
 \set i random_zipfian(15, 100, 0.5)
 \set i random_zipfian(16, 100, 0.5)
-} });
+}
+	});
 
 # throttling
 pgbench(
@@ -673,9 +784,11 @@ pgbench(
 	# given the expected rate and the 2 ms tx duration, at most one is executed
 	'-t 10 --rate=100000 --latency-limit=1 -n -r',
 	0,
-	[   qr{processed: [01]/10},
+	[
+		qr{processed: [01]/10},
 		qr{type: .*/001_pgbench_sleep},
-		qr{above the 1.0 ms latency limit: [01]/} ],
+		qr{above the 1.0 ms latency limit: [01]/}
+	],
 	[qr{^$}i],
 	'pgbench late throttling',
 	{ '001_pgbench_sleep' => q{\sleep 2ms} });
diff --git a/src/bin/pgbench/t/002_pgbench_no_server.pl b/src/bin/pgbench/t/002_pgbench_no_server.pl
index 7dcc812..a9e067b 100644
--- a/src/bin/pgbench/t/002_pgbench_no_server.pl
+++ b/src/bin/pgbench/t/002_pgbench_no_server.pl
@@ -57,81 +57,126 @@ sub pgbench_scripts
 my @options = (
 
 	# name, options, stderr checks
-	[   'bad option',
+	[
+		'bad option',
 		'-h home -p 5432 -U calvin -d --bad-option',
-		[ qr{(unrecognized|illegal) option}, qr{--help.*more information} ] ],
-	[   'no file',
+		[ qr{(unrecognized|illegal) option}, qr{--help.*more information} ]
+	],
+	[
+		'no file',
 		'-f no-such-file',
-		[qr{could not open file "no-such-file":}] ],
-	[   'no builtin',
+		[qr{could not open file "no-such-file":}]
+	],
+	[
+		'no builtin',
 		'-b no-such-builtin',
-		[qr{no builtin script .* "no-such-builtin"}] ],
-	[   'invalid weight',
+		[qr{no builtin script .* "no-such-builtin"}]
+	],
+	[
+		'invalid weight',
 		'--builtin=select-only@one',
-		[qr{invalid weight specification: \@one}] ],
-	[   'invalid weight',
+		[qr{invalid weight specification: \@one}]
+	],
+	[
+		'invalid weight',
 		'-b select-only@-1',
-		[qr{weight spec.* out of range .*: -1}] ],
+		[qr{weight spec.* out of range .*: -1}]
+	],
 	[ 'too many scripts', '-S ' x 129, [qr{at most 128 SQL scripts}] ],
 	[ 'bad #clients', '-c three', [qr{invalid number of clients: "three"}] ],
-	[   'bad #threads', '-j eleven', [qr{invalid number of threads: "eleven"}]
+	[
+		'bad #threads', '-j eleven', [qr{invalid number of threads: "eleven"}]
 	],
 	[ 'bad scale', '-i -s two', [qr{invalid scaling factor: "two"}] ],
-	[   'invalid #transactions',
+	[
+		'invalid #transactions',
 		'-t zil',
-		[qr{invalid number of transactions: "zil"}] ],
+		[qr{invalid number of transactions: "zil"}]
+	],
 	[ 'invalid duration', '-T ten', [qr{invalid duration: "ten"}] ],
-	[   '-t XOR -T',
+	[
+		'-t XOR -T',
 		'-N -l --aggregate-interval=5 --log-prefix=notused -t 1000 -T 1',
-		[qr{specify either }] ],
-	[   '-T XOR -t',
+		[qr{specify either }]
+	],
+	[
+		'-T XOR -t',
 		'-P 1 --progress-timestamp -l --sampling-rate=0.001 -T 10 -t 1000',
-		[qr{specify either }] ],
+		[qr{specify either }]
+	],
 	[ 'bad variable', '--define foobla', [qr{invalid variable definition}] ],
 	[ 'invalid fillfactor', '-F 1',            [qr{invalid fillfactor}] ],
 	[ 'invalid query mode', '-M no-such-mode', [qr{invalid query mode}] ],
-	[   'invalid progress', '--progress=0',
-		[qr{invalid thread progress delay}] ],
+	[
+		'invalid progress', '--progress=0',
+		[qr{invalid thread progress delay}]
+	],
 	[ 'invalid rate',    '--rate=0.0',          [qr{invalid rate limit}] ],
 	[ 'invalid latency', '--latency-limit=0.0', [qr{invalid latency limit}] ],
-	[   'invalid sampling rate', '--sampling-rate=0',
-		[qr{invalid sampling rate}] ],
-	[   'invalid aggregate interval', '--aggregate-interval=-3',
-		[qr{invalid .* seconds for}] ],
-	[   'weight zero',
+	[
+		'invalid sampling rate', '--sampling-rate=0',
+		[qr{invalid sampling rate}]
+	],
+	[
+		'invalid aggregate interval', '--aggregate-interval=-3',
+		[qr{invalid .* seconds for}]
+	],
+	[
+		'weight zero',
 		'-b se@0 -b si@0 -b tpcb@0',
-		[qr{weight must not be zero}] ],
+		[qr{weight must not be zero}]
+	],
 	[ 'init vs run', '-i -S',    [qr{cannot be used in initialization}] ],
 	[ 'run vs init', '-S -F 90', [qr{cannot be used in benchmarking}] ],
 	[ 'ambiguous builtin', '-b s', [qr{ambiguous}] ],
-	[   '--progress-timestamp => --progress', '--progress-timestamp',
-		[qr{allowed only under}] ],
-	[   '-I without init option',
+	[
+		'--progress-timestamp => --progress', '--progress-timestamp',
+		[qr{allowed only under}]
+	],
+	[
+		'-I without init option',
 		'-I dtg',
-		[qr{cannot be used in benchmarking mode}] ],
-	[   'invalid init step',
+		[qr{cannot be used in benchmarking mode}]
+	],
+	[
+		'invalid init step',
 		'-i -I dta',
-		[ qr{unrecognized initialization step}, qr{allowed steps are} ] ],
-	[   'bad random seed',
+		[ qr{unrecognized initialization step}, qr{allowed steps are} ]
+	],
+	[
+		'bad random seed',
 		'--random-seed=one',
-		[   qr{unrecognized random seed option "one": expecting an unsigned integer, "time" or "rand"},
-			qr{error while setting random seed from --random-seed option} ] ],
+		[
+			qr{unrecognized random seed option "one": expecting an unsigned integer, "time" or "rand"},
+			qr{error while setting random seed from --random-seed option}
+		]
+	],
 
 	# loging sub-options
-	[   'sampling => log', '--sampling-rate=0.01',
-		[qr{log sampling .* only when}] ],
-	[   'sampling XOR aggregate',
+	[
+		'sampling => log', '--sampling-rate=0.01',
+		[qr{log sampling .* only when}]
+	],
+	[
+		'sampling XOR aggregate',
 		'-l --sampling-rate=0.1 --aggregate-interval=3',
-		[qr{sampling .* aggregation .* cannot be used at the same time}] ],
-	[   'aggregate => log', '--aggregate-interval=3',
-		[qr{aggregation .* only when}] ],
+		[qr{sampling .* aggregation .* cannot be used at the same time}]
+	],
+	[
+		'aggregate => log', '--aggregate-interval=3',
+		[qr{aggregation .* only when}]
+	],
 	[ 'log-prefix => log', '--log-prefix=x', [qr{prefix .* only when}] ],
-	[   'duration & aggregation',
+	[
+		'duration & aggregation',
 		'-l -T 1 --aggregate-interval=3',
-		[qr{aggr.* not be higher}] ],
-	[   'duration % aggregation',
+		[qr{aggr.* not be higher}]
+	],
+	[
+		'duration % aggregation',
 		'-l -T 5 --aggregate-interval=3',
-		[qr{multiple}] ],);
+		[qr{multiple}]
+	],);
 
 for my $o (@options)
 {
@@ -143,11 +188,13 @@ for my $o (@options)
 # Help
 pgbench(
 	'--help', 0,
-	[   qr{benchmarking tool for PostgreSQL},
+	[
+		qr{benchmarking tool for PostgreSQL},
 		qr{Usage},
 		qr{Initialization options:},
 		qr{Common options:},
-		qr{Report bugs to} ],
+		qr{Report bugs to}
+	],
 	[qr{^$}],
 	'pgbench help');
 
@@ -159,43 +206,65 @@ pgbench(
 	'-b list',
 	0,
 	[qr{^$}],
-	[   qr{Available builtin scripts:}, qr{tpcb-like},
-		qr{simple-update},              qr{select-only} ],
+	[
+		qr{Available builtin scripts:}, qr{tpcb-like},
+		qr{simple-update},              qr{select-only}
+	],
 	'pgbench builtin list');
 
 my @script_tests = (
 
 	# name, err, { file => contents }
-	[   'missing endif',
+	[
+		'missing endif',
 		[qr{\\if without matching \\endif}],
-		{ 'if-noendif.sql' => '\if 1' } ],
-	[   'missing if on elif',
+		{ 'if-noendif.sql' => '\if 1' }
+	],
+	[
+		'missing if on elif',
 		[qr{\\elif without matching \\if}],
-		{ 'elif-noif.sql' => '\elif 1' } ],
-	[   'missing if on else',
+		{ 'elif-noif.sql' => '\elif 1' }
+	],
+	[
+		'missing if on else',
 		[qr{\\else without matching \\if}],
-		{ 'else-noif.sql' => '\else' } ],
-	[   'missing if on endif',
+		{ 'else-noif.sql' => '\else' }
+	],
+	[
+		'missing if on endif',
 		[qr{\\endif without matching \\if}],
-		{ 'endif-noif.sql' => '\endif' } ],
-	[   'elif after else',
+		{ 'endif-noif.sql' => '\endif' }
+	],
+	[
+		'elif after else',
 		[qr{\\elif after \\else}],
-		{ 'else-elif.sql' => "\\if 1\n\\else\n\\elif 0\n\\endif" } ],
-	[   'else after else',
+		{ 'else-elif.sql' => "\\if 1\n\\else\n\\elif 0\n\\endif" }
+	],
+	[
+		'else after else',
 		[qr{\\else after \\else}],
-		{ 'else-else.sql' => "\\if 1\n\\else\n\\else\n\\endif" } ],
-	[   'if syntax error',
+		{ 'else-else.sql' => "\\if 1\n\\else\n\\else\n\\endif" }
+	],
+	[
+		'if syntax error',
 		[qr{syntax error in command "if"}],
-		{ 'if-bad.sql' => "\\if\n\\endif\n" } ],
-	[   'elif syntax error',
+		{ 'if-bad.sql' => "\\if\n\\endif\n" }
+	],
+	[
+		'elif syntax error',
 		[qr{syntax error in command "elif"}],
-		{ 'elif-bad.sql' => "\\if 0\n\\elif +\n\\endif\n" } ],
-	[   'else syntax error',
+		{ 'elif-bad.sql' => "\\if 0\n\\elif +\n\\endif\n" }
+	],
+	[
+		'else syntax error',
 		[qr{unexpected argument in command "else"}],
-		{ 'else-bad.sql' => "\\if 0\n\\else BAD\n\\endif\n" } ],
-	[   'endif syntax error',
+		{ 'else-bad.sql' => "\\if 0\n\\else BAD\n\\endif\n" }
+	],
+	[
+		'endif syntax error',
 		[qr{unexpected argument in command "endif"}],
-		{ 'endif-bad.sql' => "\\if 0\n\\endif BAD\n" } ],);
+		{ 'endif-bad.sql' => "\\if 0\n\\endif BAD\n" }
+	],);
 
 for my $t (@script_tests)
 {
diff --git a/src/bin/psql/create_help.pl b/src/bin/psql/create_help.pl
index cb0e6e8..08ed032 100644
--- a/src/bin/psql/create_help.pl
+++ b/src/bin/psql/create_help.pl
@@ -149,7 +149,8 @@ foreach my $file (sort readdir DIR)
 				cmddesc     => $cmddesc,
 				cmdsynopsis => $cmdsynopsis,
 				params      => \@params,
-				nl_count    => $nl_count };
+				nl_count    => $nl_count
+			};
 			$maxlen =
 			  ($maxlen >= length $cmdname) ? $maxlen : length $cmdname;
 		}
diff --git a/src/test/kerberos/t/001_auth.pl b/src/test/kerberos/t/001_auth.pl
index ba90231..5e638eb 100644
--- a/src/test/kerberos/t/001_auth.pl
+++ b/src/test/kerberos/t/001_auth.pl
@@ -161,7 +161,8 @@ sub test_access
 		'SELECT 1',
 		extra_params => [
 			'-d', $node->connstr('postgres') . ' host=localhost',
-			'-U', $role ]);
+			'-U', $role
+		]);
 	is($res, $expected_res, $test_name);
 }
 
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index 10716ab..fb4ecf8 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -43,12 +43,16 @@ my %pgdump_runs = (
 		dump_cmd => [
 			'pg_dump',                            '--no-sync',
 			"--file=$tempdir/binary_upgrade.sql", '--schema-only',
-			'--binary-upgrade',                   '--dbname=postgres', ], },
+			'--binary-upgrade',                   '--dbname=postgres',
+		],
+	},
 	clean => {
 		dump_cmd => [
 			'pg_dump', "--file=$tempdir/clean.sql",
 			'-c',      '--no-sync',
-			'--dbname=postgres', ], },
+			'--dbname=postgres',
+		],
+	},
 	clean_if_exists => {
 		dump_cmd => [
 			'pg_dump',
@@ -57,7 +61,9 @@ my %pgdump_runs = (
 			'-c',
 			'--if-exists',
 			'--encoding=UTF8',    # no-op, just tests that option is accepted
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	createdb => {
 		dump_cmd => [
 			'pg_dump',
@@ -65,7 +71,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/createdb.sql",
 			'-C',
 			'-R',                 # no-op, just for testing
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	data_only => {
 		dump_cmd => [
 			'pg_dump',
@@ -73,7 +81,9 @@ my %pgdump_runs = (
 			"--file=$tempdir/data_only.sql",
 			'-a',
 			'-v',                 # no-op, just make sure it works
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	defaults => {
 		dump_cmd => [ 'pg_dump', '-f', "$tempdir/defaults.sql", 'postgres', ],
 	},
@@ -81,70 +91,96 @@ my %pgdump_runs = (
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Fc', '-Z6',
-			"--file=$tempdir/defaults_custom_format.dump", 'postgres', ],
+			"--file=$tempdir/defaults_custom_format.dump", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_custom_format.sql",
-			"$tempdir/defaults_custom_format.dump", ], },
+			"$tempdir/defaults_custom_format.dump",
+		],
+	},
 	defaults_dir_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Fd',
-			"--file=$tempdir/defaults_dir_format", 'postgres', ],
+			"--file=$tempdir/defaults_dir_format", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_dir_format.sql",
-			"$tempdir/defaults_dir_format", ], },
+			"$tempdir/defaults_dir_format",
+		],
+	},
 	defaults_parallel => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Fd', '-j2',
-			"--file=$tempdir/defaults_parallel", 'postgres', ],
+			"--file=$tempdir/defaults_parallel", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_parallel.sql",
-			"$tempdir/defaults_parallel", ], },
+			"$tempdir/defaults_parallel",
+		],
+	},
 	defaults_tar_format => {
 		test_key => 'defaults',
 		dump_cmd => [
 			'pg_dump', '--no-sync', '-Ft',
-			"--file=$tempdir/defaults_tar_format.tar", 'postgres', ],
+			"--file=$tempdir/defaults_tar_format.tar", 'postgres',
+		],
 		restore_cmd => [
 			'pg_restore',
 			"--file=$tempdir/defaults_tar_format.sql",
-			"$tempdir/defaults_tar_format.tar", ], },
+			"$tempdir/defaults_tar_format.tar",
+		],
+	},
 	pg_dumpall_globals => {
 		dump_cmd => [
 			'pg_dumpall',                             '--no-sync',
-			"--file=$tempdir/pg_dumpall_globals.sql", '-g', ], },
+			"--file=$tempdir/pg_dumpall_globals.sql", '-g',
+		],
+	},
 	no_privs => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_privs.sql", '-x',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	no_owner => {
 		dump_cmd => [
 			'pg_dump',                      '--no-sync',
 			"--file=$tempdir/no_owner.sql", '-O',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	schema_only => {
 		dump_cmd => [
 			'pg_dump', '--no-sync', "--file=$tempdir/schema_only.sql",
-			'-s', 'postgres', ], },
+			'-s', 'postgres',
+		],
+	},
 	section_pre_data => {
 		dump_cmd => [
 			'pg_dump',                              '--no-sync',
 			"--file=$tempdir/section_pre_data.sql", '--section=pre-data',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_data => {
 		dump_cmd => [
 			'pg_dump',                          '--no-sync',
 			"--file=$tempdir/section_data.sql", '--section=data',
-			'postgres', ], },
+			'postgres',
+		],
+	},
 	section_post_data => {
 		dump_cmd => [
 			'pg_dump', '--no-sync', "--file=$tempdir/section_post_data.sql",
-			'--section=post-data', 'postgres', ], },);
+			'--section=post-data', 'postgres',
+		],
+	},);
 
 ###############################################################
 # Definition of the tests to run.
@@ -196,7 +232,8 @@ my %tests = (
 			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE EXTENSION test_pg_dump' => {
 		create_order => 2,
@@ -207,14 +244,17 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { binary_upgrade => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { binary_upgrade => 1, },
+	},
 
 	'CREATE ROLE regress_dump_test_role' => {
 		create_order => 1,
 		create_sql   => 'CREATE ROLE regress_dump_test_role;',
 		regexp       => qr/^CREATE ROLE regress_dump_test_role;\n/m,
-		like         => { pg_dumpall_globals => 1, }, },
+		like         => { pg_dumpall_globals => 1, },
+	},
 
 	'CREATE SEQUENCE regress_pg_dump_table_col1_seq' => {
 		regexp => qr/^
@@ -226,7 +266,8 @@ my %tests = (
                     \n\s+\QNO MAXVALUE\E
                     \n\s+\QCACHE 1;\E
                     \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE TABLE regress_pg_dump_table_added' => {
 		create_order => 7,
@@ -237,7 +278,8 @@ my %tests = (
 			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE SEQUENCE regress_pg_dump_seq' => {
 		regexp => qr/^
@@ -248,7 +290,8 @@ my %tests = (
                     \n\s+\QNO MAXVALUE\E
                     \n\s+\QCACHE 1;\E
                     \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'SETVAL SEQUENCE regress_seq_dumpable' => {
 		create_order => 6,
@@ -259,7 +302,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			data_only    => 1,
-			section_data => 1, }, },
+			section_data => 1,
+		},
+	},
 
 	'CREATE TABLE regress_pg_dump_table' => {
 		regexp => qr/^
@@ -267,13 +312,15 @@ my %tests = (
 			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE ACCESS METHOD regress_test_am' => {
 		regexp => qr/^
 			\QCREATE ACCESS METHOD regress_test_am TYPE INDEX HANDLER bthandler;\E
 			\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'COMMENT ON EXTENSION test_pg_dump' => {
 		regexp => qr/^
@@ -283,7 +330,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, }, },
+			section_pre_data => 1,
+		},
+	},
 
 	'GRANT SELECT regress_pg_dump_table_added pre-ALTER EXTENSION' => {
 		create_order => 8,
@@ -292,7 +341,8 @@ my %tests = (
 		regexp => qr/^
 			\QGRANT SELECT ON TABLE public.regress_pg_dump_table_added TO regress_dump_test_role;\E
 			\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'REVOKE SELECT regress_pg_dump_table_added post-ALTER EXTENSION' => {
 		create_order => 10,
@@ -304,8 +354,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	'GRANT SELECT ON TABLE regress_pg_dump_table' => {
 		regexp => qr/^
@@ -313,7 +365,8 @@ my %tests = (
 			\QGRANT SELECT ON TABLE public.regress_pg_dump_table TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT SELECT(col1) ON regress_pg_dump_table' => {
 		regexp => qr/^
@@ -321,7 +374,8 @@ my %tests = (
 			\QGRANT SELECT(col1) ON TABLE public.regress_pg_dump_table TO PUBLIC;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT SELECT(col2) ON regress_pg_dump_table TO regress_dump_test_role'
 	  => {
@@ -334,8 +388,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	  },
 
 	'GRANT USAGE ON regress_pg_dump_table_col1_seq TO regress_dump_test_role'
 	  => {
@@ -348,14 +404,17 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	  },
 
 	'GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role' => {
 		regexp => qr/^
 			\QGRANT USAGE ON SEQUENCE public.regress_pg_dump_seq TO regress_dump_test_role;\E
 			\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'REVOKE SELECT(col1) ON regress_pg_dump_table' => {
 		create_order => 3,
@@ -367,8 +426,10 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, },
-		unlike => { no_privs => 1, }, },
+			section_pre_data => 1,
+		},
+		unlike => { no_privs => 1, },
+	},
 
 	# Objects included in extension part of a schema created by this extension */
 	'CREATE TABLE regress_pg_dump_schema.test_table' => {
@@ -377,7 +438,8 @@ my %tests = (
 			\n\s+\Qcol1 integer,\E
 			\n\s+\Qcol2 integer\E
 			\n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT SELECT ON regress_pg_dump_schema.test_table' => {
 		regexp => qr/^
@@ -385,7 +447,8 @@ my %tests = (
 			\QGRANT SELECT ON TABLE regress_pg_dump_schema.test_table TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE SEQUENCE regress_pg_dump_schema.test_seq' => {
 		regexp => qr/^
@@ -396,7 +459,8 @@ my %tests = (
                     \n\s+\QNO MAXVALUE\E
                     \n\s+\QCACHE 1;\E
                     \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT USAGE ON regress_pg_dump_schema.test_seq' => {
 		regexp => qr/^
@@ -404,14 +468,16 @@ my %tests = (
 			\QGRANT USAGE ON SEQUENCE regress_pg_dump_schema.test_seq TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE TYPE regress_pg_dump_schema.test_type' => {
 		regexp => qr/^
                     \QCREATE TYPE regress_pg_dump_schema.test_type AS (\E
                     \n\s+\Qcol1 integer\E
                     \n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT USAGE ON regress_pg_dump_schema.test_type' => {
 		regexp => qr/^
@@ -419,14 +485,16 @@ my %tests = (
 			\QGRANT ALL ON TYPE regress_pg_dump_schema.test_type TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE FUNCTION regress_pg_dump_schema.test_func' => {
 		regexp => qr/^
             \QCREATE FUNCTION regress_pg_dump_schema.test_func() RETURNS integer\E
             \n\s+\QLANGUAGE sql\E
             \n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT ALL ON regress_pg_dump_schema.test_func' => {
 		regexp => qr/^
@@ -434,7 +502,8 @@ my %tests = (
 			\QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'CREATE AGGREGATE regress_pg_dump_schema.test_agg' => {
 		regexp => qr/^
@@ -442,7 +511,8 @@ my %tests = (
             \n\s+\QSFUNC = int2_sum,\E
             \n\s+\QSTYPE = bigint\E
             \n\);\n/xm,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	'GRANT ALL ON regress_pg_dump_schema.test_agg' => {
 		regexp => qr/^
@@ -450,7 +520,8 @@ my %tests = (
 			\QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_agg(smallint) TO regress_dump_test_role;\E\n
 			\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
 			\n/xms,
-		like => { binary_upgrade => 1, }, },
+		like => { binary_upgrade => 1, },
+	},
 
 	# Objects not included in extension, part of schema created by extension
 	'CREATE TABLE regress_pg_dump_schema.external_tab' => {
@@ -464,7 +535,9 @@ my %tests = (
 		like => {
 			%full_runs,
 			schema_only      => 1,
-			section_pre_data => 1, }, },);
+			section_pre_data => 1,
+		},
+	},);
 
 #########################################
 # Create a PG instance to test actually dumping from
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 53efb57..82a2611 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -155,7 +155,8 @@ sub new
 		_host    => $pghost,
 		_basedir => "$TestLib::tmp_check/t_${testname}_${name}_data",
 		_name    => $name,
-		_logfile => "$TestLib::log_path/${testname}_${name}.log" };
+		_logfile => "$TestLib::log_path/${testname}_${name}.log"
+	};
 
 	bless $self, $class;
 	mkdir $self->{_basedir}
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 355ef5f..c9f824b 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -256,7 +256,8 @@ sub check_mode_recursive
 	my $result = 1;
 
 	find(
-		{   follow_fast => 1,
+		{
+			follow_fast => 1,
 			wanted      => sub {
 				my $file_stat = stat($File::Find::name);
 
@@ -322,7 +323,8 @@ sub chmod_recursive
 	my ($dir, $dir_mode, $file_mode) = @_;
 
 	find(
-		{   follow_fast => 1,
+		{
+			follow_fast => 1,
 			wanted      => sub {
 				my $file_stat = stat($File::Find::name);
 
diff --git a/src/test/recovery/t/006_logical_decoding.pl b/src/test/recovery/t/006_logical_decoding.pl
index ff1ea0e..e3a5fe9 100644
--- a/src/test/recovery/t/006_logical_decoding.pl
+++ b/src/test/recovery/t/006_logical_decoding.pl
@@ -112,8 +112,10 @@ SKIP:
 	skip "Test fails on Windows perl", 2 if $Config{osname} eq 'MSWin32';
 
 	my $pg_recvlogical = IPC::Run::start(
-		[   'pg_recvlogical', '-d', $node_master->connstr('otherdb'),
-			'-S', 'otherdb_slot', '-f', '-', '--start' ]);
+		[
+			'pg_recvlogical', '-d', $node_master->connstr('otherdb'),
+			'-S', 'otherdb_slot', '-f', '-', '--start'
+		]);
 	$node_master->poll_query_until('otherdb',
 		"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'otherdb_slot' AND active_pid IS NOT NULL)"
 	) or die "slot never became active";
diff --git a/src/test/recovery/t/011_crash_recovery.pl b/src/test/recovery/t/011_crash_recovery.pl
index 6fe4786..5dc5241 100644
--- a/src/test/recovery/t/011_crash_recovery.pl
+++ b/src/test/recovery/t/011_crash_recovery.pl
@@ -29,8 +29,10 @@ my ($stdin, $stdout, $stderr) = ('', '', '');
 # an xact to be in-progress when we crash and we need to know
 # its xid.
 my $tx = IPC::Run::start(
-	[   'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres') ],
+	[
+		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
+		$node->connstr('postgres')
+	],
 	'<',
 	\$stdin,
 	'>',
diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl
index d8ef22f..440ac09 100644
--- a/src/test/recovery/t/013_crash_restart.pl
+++ b/src/test/recovery/t/013_crash_restart.pl
@@ -39,8 +39,10 @@ $node->safe_psql(
 # Run psql, keeping session alive, so we have an alive backend to kill.
 my ($killme_stdin, $killme_stdout, $killme_stderr) = ('', '', '');
 my $killme = IPC::Run::start(
-	[   'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres') ],
+	[
+		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
+		$node->connstr('postgres')
+	],
 	'<',
 	\$killme_stdin,
 	'>',
@@ -52,8 +54,10 @@ my $killme = IPC::Run::start(
 # Need a second psql to check if crash-restart happened.
 my ($monitor_stdin, $monitor_stdout, $monitor_stderr) = ('', '', '');
 my $monitor = IPC::Run::start(
-	[   'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
-		$node->connstr('postgres') ],
+	[
+		'psql', '-X', '-qAt', '-v', 'ON_ERROR_STOP=1', '-f', '-', '-d',
+		$node->connstr('postgres')
+	],
 	'<',
 	\$monitor_stdin,
 	'>',
diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm
index 5ca9e0d..ced279c 100644
--- a/src/test/ssl/ServerSetup.pm
+++ b/src/test/ssl/ServerSetup.pm
@@ -43,7 +43,8 @@ sub test_connect_ok
 	my $cmd = [
 		'psql', '-X', '-A', '-t', '-c',
 		"SELECT \$\$connected with $connstr\$\$",
-		'-d', "$common_connstr $connstr" ];
+		'-d', "$common_connstr $connstr"
+	];
 
 	command_ok($cmd, $test_name);
 }
@@ -55,7 +56,8 @@ sub test_connect_fails
 	my $cmd = [
 		'psql', '-X', '-A', '-t', '-c',
 		"SELECT \$\$connected with $connstr\$\$",
-		'-d', "$common_connstr $connstr" ];
+		'-d', "$common_connstr $connstr"
+	];
 
 	command_fails_like($cmd, $expected_stderr, $test_name);
 }
diff --git a/src/tools/git_changelog b/src/tools/git_changelog
index 352dc1c..1262bc1 100755
--- a/src/tools/git_changelog
+++ b/src/tools/git_changelog
@@ -317,7 +317,8 @@ sub push_commit
 			'message'   => $c->{'message'},
 			'commit'    => $c->{'commit'},
 			'commits'   => [],
-			'timestamp' => $ts };
+			'timestamp' => $ts
+		};
 		push @{ $all_commits{$ht} }, $cc;
 	}
 
@@ -326,7 +327,8 @@ sub push_commit
 		'branch'   => $c->{'branch'},
 		'commit'   => $c->{'commit'},
 		'date'     => $c->{'date'},
-		'last_tag' => $c->{'last_tag'} };
+		'last_tag' => $c->{'last_tag'}
+	};
 	push @{ $cc->{'commits'} }, $smallc;
 	push @{ $all_commits_by_branch{ $c->{'branch'} } }, $cc;
 	$cc->{'branch_position'}{ $c->{'branch'} } =
diff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm
index 064ea2f..67124bb 100644
--- a/src/tools/msvc/Install.pm
+++ b/src/tools/msvc/Install.pm
@@ -95,7 +95,8 @@ sub Install
 	my @top_dir      = ("src");
 	@top_dir = ("src\\bin", "src\\interfaces") if ($insttype eq "client");
 	File::Find::find(
-		{   wanted => sub {
+		{
+			wanted => sub {
 				/^.*\.sample\z/s
 				  && push(@$sample_files, $File::Find::name);
 
@@ -155,7 +156,8 @@ sub Install
 		push @pldirs, "src/pl/plpython" if $config->{python};
 		push @pldirs, "src/pl/tcl"      if $config->{tcl};
 		File::Find::find(
-			{   wanted => sub {
+			{
+				wanted => sub {
 					/^(.*--.*\.sql|.*\.control)\z/s
 					  && push(@$pl_extension_files, $File::Find::name);
 
@@ -686,7 +688,8 @@ sub GenerateNLSFiles
 	EnsureDirectories($target, "share/locale");
 	my @flist;
 	File::Find::find(
-		{   wanted => sub {
+		{
+			wanted => sub {
 				/^nls\.mk\z/s
 				  && !push(@flist, $File::Find::name);
 			}
diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm
index ca6e8e5..2726d60 100644
--- a/src/tools/msvc/MSBuildProject.pm
+++ b/src/tools/msvc/MSBuildProject.pm
@@ -65,17 +65,21 @@ EOF
 
 	$self->WriteItemDefinitionGroup(
 		$f, 'Debug',
-		{   defs    => "_DEBUG;DEBUG=1",
+		{
+			defs    => "_DEBUG;DEBUG=1",
 			opt     => 'Disabled',
 			strpool => 'false',
-			runtime => 'MultiThreadedDebugDLL' });
+			runtime => 'MultiThreadedDebugDLL'
+		});
 	$self->WriteItemDefinitionGroup(
 		$f,
 		'Release',
-		{   defs    => "",
+		{
+			defs    => "",
 			opt     => 'Full',
 			strpool => 'true',
-			runtime => 'MultiThreadedDLL' });
+			runtime => 'MultiThreadedDLL'
+		});
 }
 
 sub AddDefine
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index b2f5fd6..593732f 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -39,7 +39,8 @@ my $contrib_extralibs      = undef;
 my $contrib_extraincludes = { 'dblink' => ['src/backend'] };
 my $contrib_extrasource = {
 	'cube' => [ 'contrib/cube/cubescan.l', 'contrib/cube/cubeparse.y' ],
-	'seg'  => [ 'contrib/seg/segscan.l',   'contrib/seg/segparse.y' ], };
+	'seg'  => [ 'contrib/seg/segscan.l',   'contrib/seg/segparse.y' ],
+};
 my @contrib_excludes = (
 	'commit_ts',       'hstore_plperl',
 	'hstore_plpython', 'intagg',
@@ -64,14 +65,17 @@ my $frontend_extralibs = {
 	'initdb'     => ['ws2_32.lib'],
 	'pg_restore' => ['ws2_32.lib'],
 	'pgbench'    => ['ws2_32.lib'],
-	'psql'       => ['ws2_32.lib'] };
+	'psql'       => ['ws2_32.lib']
+};
 my $frontend_extraincludes = {
 	'initdb' => ['src/timezone'],
-	'psql'   => ['src/backend'] };
+	'psql'   => ['src/backend']
+};
 my $frontend_extrasource = {
 	'psql' => ['src/bin/psql/psqlscanslash.l'],
 	'pgbench' =>
-	  [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ] };
+	  [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ]
+};
 my @frontend_excludes = (
 	'pgevent',    'pg_basebackup', 'pg_rewind', 'pg_dump',
 	'pg_waldump', 'scripts');
diff --git a/src/tools/msvc/Project.pm b/src/tools/msvc/Project.pm
index 3e08ce9..46c680d 100644
--- a/src/tools/msvc/Project.pm
+++ b/src/tools/msvc/Project.pm
@@ -16,7 +16,8 @@ sub _new
 	my $good_types = {
 		lib => 1,
 		exe => 1,
-		dll => 1, };
+		dll => 1,
+	};
 	confess("Bad project type: $type\n") unless exists $good_types->{$type};
 	my $self = {
 		name                  => $name,
@@ -32,7 +33,8 @@ sub _new
 		solution              => $solution,
 		disablewarnings       => '4018;4244;4273;4102;4090;4267',
 		disablelinkerwarnings => '',
-		platform              => $solution->{platform}, };
+		platform              => $solution->{platform},
+	};
 
 	bless($self, $classname);
 	return $self;
diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index 55566bf..4ad1f8f 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -22,7 +22,8 @@ sub _new
 		VisualStudioVersion        => undef,
 		MinimumVisualStudioVersion => undef,
 		vcver                      => undef,
-		platform                   => undef, };
+		platform                   => undef,
+	};
 	bless($self, $classname);
 
 	$self->DeterminePlatform();
diff --git a/src/tools/msvc/VCBuildProject.pm b/src/tools/msvc/VCBuildProject.pm
index d3a03c5..57b8525 100644
--- a/src/tools/msvc/VCBuildProject.pm
+++ b/src/tools/msvc/VCBuildProject.pm
@@ -35,19 +35,23 @@ EOF
 
 	$self->WriteConfiguration(
 		$f, 'Debug',
-		{   defs     => "_DEBUG;DEBUG=1",
+		{
+			defs     => "_DEBUG;DEBUG=1",
 			wholeopt => 0,
 			opt      => 0,
 			strpool  => 'false',
-			runtime  => 3 });
+			runtime  => 3
+		});
 	$self->WriteConfiguration(
 		$f,
 		'Release',
-		{   defs     => "",
+		{
+			defs     => "",
 			wholeopt => 0,
 			opt      => 3,
 			strpool  => 'true',
-			runtime  => 2 });
+			runtime  => 2
+		});
 	print $f <<EOF;
  </Configurations>
 EOF
diff --git a/src/tools/pgindent/perltidyrc b/src/tools/pgindent/perltidyrc
index 29baef7..f34ae52 100644
--- a/src/tools/pgindent/perltidyrc
+++ b/src/tools/pgindent/perltidyrc
@@ -11,5 +11,5 @@
 --opening-brace-on-new-line
 --output-line-ending=unix
 --paren-tightness=2
---vertical-tightness=2
---vertical-tightness-closing=2
+--paren-vertical-tightness=2
+--paren-vertical-tightness-closing=2
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index ce0f43f..fc616e7 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -389,7 +389,8 @@ sub build_clean
 
 # get the list of files under code base, if it's set
 File::Find::find(
-	{   wanted => sub {
+	{
+		wanted => sub {
 			my ($dev, $ino, $mode, $nlink, $uid, $gid);
 			(($dev, $ino, $mode, $nlink, $uid, $gid) = lstat($_))
 			  && -f _
diff --git a/src/tools/win32tzlist.pl b/src/tools/win32tzlist.pl
index 4610d43..0fb561b 100755
--- a/src/tools/win32tzlist.pl
+++ b/src/tools/win32tzlist.pl
@@ -47,9 +47,11 @@ foreach my $keyname (@subkeys)
 	die "Incomplete timezone data for $keyname!\n"
 	  unless ($vals{Std} && $vals{Dlt} && $vals{Display});
 	push @system_zones,
-	  { 'std'     => $vals{Std}->[2],
+	  {
+		'std'     => $vals{Std}->[2],
 		'dlt'     => $vals{Dlt}->[2],
-		'display' => clean_displayname($vals{Display}->[2]), };
+		'display' => clean_displayname($vals{Display}->[2]),
+	  };
 }
 
 $basekey->Close();
@@ -75,10 +77,12 @@ while ($pgtz =~
 	m/{\s+"([^"]+)",\s+"([^"]+)",\s+"([^"]+)",?\s+},\s+\/\*(.+?)\*\//gs)
 {
 	push @file_zones,
-	  { 'std'     => $1,
+	  {
+		'std'     => $1,
 		'dlt'     => $2,
 		'match'   => $3,
-		'display' => clean_displayname($4), };
+		'display' => clean_displayname($4),
+	  };
 }
 
 #
#14Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andrew Dunstan (#13)
Re: perlcritic and perltidy

Andrew Dunstan wrote:

Yes, there are separate settings for the three types of brackets. Here's
what happens if we restrict the vertical tightness settings to parentheses.
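
For reference, perltidy's generic --vertical-tightness flag is shorthand
for the three per-bracket-type settings, so restricting it to parentheses
means keeping only the paren variants, roughly this in perltidyrc terms
(option names as given in the perltidy documentation):

# --vertical-tightness=2 is shorthand for all three of
#   --paren-vertical-tightness=2
#   --square-bracket-vertical-tightness=2
#   --brace-vertical-tightness=2
# the parentheses-only form sets just
--paren-vertical-tightness=2
--paren-vertical-tightness-closing=2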

I think that's an unambiguous improvement.

LGTM.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#15Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Alvaro Herrera (#14)
Re: perlcritic and perltidy

On 5/8/18 11:39, Alvaro Herrera wrote:

Andrew Dunstan wrote:

Yes, there are separate settings for the three types of brackets. Here's
what happens if we restrict the vertical tightness settings to parentheses.

I think that's an unambiguous improvement.

LGTM.

Yes, that looks better.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#16Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: Andrew Dunstan (#12)
1 attachment(s)
Re: perlcritic and perltidy

On 05/08/2018 10:06 AM, Andrew Dunstan wrote:

{
    find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print
    find . -type f -perm -100 -exec file {} \; -print \
           | egrep -i ':.*perl[0-9]*\>' \
           | cut -d: -f1
} \
| sort -u | xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas

Here's a diff of all the places it found fixed. At this stage I don't
think it's worth it. If someone wants to write a perlcritic policy that
identifies missing trailing commas reasonably comprehensively, we can
look again. Otherwise we should just clean them up as we come across them.
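
For what it's worth, such a policy would presumably hang off
PPI::Structure::List and complain when the last significant token before
the closer of a multi-line list is not a comma. A rough, untested sketch,
with class and method names as in the Perl::Critic and PPI documentation
and an invented policy name:

package Perl::Critic::Policy::CodeLayout::RequireTrailingCommaMultiline;

use strict;
use warnings;
use parent 'Perl::Critic::Policy';
use Perl::Critic::Utils qw(:severities);

sub supported_parameters { return () }
sub default_severity     { return $SEVERITY_LOWEST }
sub default_themes       { return qw(cosmetic) }
sub applies_to           { return 'PPI::Structure::List' }

sub violates
{
	my ($self, $elem) = @_;

	# Only multi-line lists are interesting.
	return unless $elem->start && $elem->finish;
	return if $elem->start->line_number == $elem->finish->line_number;

	# The list contents are normally a single expression node; look at
	# its final significant token.
	my ($expr) = $elem->schildren;
	return if !$expr;
	my $last = $expr->isa('PPI::Node') ? ($expr->schildren)[-1] : $expr;
	return if !$last;
	return if $last->isa('PPI::Token::Operator') && $last->content eq ',';

	return $self->violation('Multi-line list lacks a trailing comma',
		'Add a comma after the final element', $elem);
}

1;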

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

missing-comma.difftext/x-patch; name=missing-comma.diffDownload
diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index f387c86..ac19682 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -34,7 +34,7 @@ sub ParseHeader
 		'Oid'           => 'oid',
 		'NameData'      => 'name',
 		'TransactionId' => 'xid',
-		'XLogRecPtr'    => 'pg_lsn');
+		'XLogRecPtr'    => 'pg_lsn',);
 
 	my %catalog;
 	my $declaring_attributes = 0;
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl
index fb61db0..0a7d433 100644
--- a/src/backend/catalog/genbki.pl
+++ b/src/backend/catalog/genbki.pl
@@ -245,7 +245,7 @@ my %lookup_kind = (
 	pg_operator => \%operoids,
 	pg_opfamily => \%opfoids,
 	pg_proc     => \%procoids,
-	pg_type     => \%typeoids);
+	pg_type     => \%typeoids,);
 
 
 # Open temp files
@@ -631,7 +631,7 @@ sub gen_pg_attribute
 				{ name => 'cmin',     type => 'cid' },
 				{ name => 'xmax',     type => 'xid' },
 				{ name => 'cmax',     type => 'cid' },
-				{ name => 'tableoid', type => 'oid' });
+				{ name => 'tableoid', type => 'oid' },);
 			foreach my $attr (@SYS_ATTRS)
 			{
 				$attnum--;
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
index a50f6f3..6d40d68 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
@@ -21,7 +21,7 @@ my $mapping = read_source("CP932.TXT");
 my @reject_sjis = (
 	0xed40 .. 0xeefc, 0x8754 .. 0x875d, 0x878a, 0x8782,
 	0x8784,           0xfa5b,           0xfa54, 0x8790 .. 0x8792,
-	0x8795 .. 0x8797, 0x879a .. 0x879c);
+	0x8795 .. 0x8797, 0x879a .. 0x879c,);
 
 foreach my $i (@$mapping)
 {
diff --git a/src/backend/utils/mb/Unicode/UCS_to_most.pl b/src/backend/utils/mb/Unicode/UCS_to_most.pl
index 4453449..26fd15d 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_most.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_most.pl
@@ -47,7 +47,7 @@ my %filename = (
 	'ISO8859_16' => '8859-16.TXT',
 	'KOI8R'      => 'KOI8-R.TXT',
 	'KOI8U'      => 'KOI8-U.TXT',
-	'GBK'        => 'CP936.TXT');
+	'GBK'        => 'CP936.TXT',);
 
 # make maps for all encodings if not specified
 my @charsets = (scalar(@ARGV) > 0) ? @ARGV : sort keys(%filename);
diff --git a/src/interfaces/ecpg/preproc/check_rules.pl b/src/interfaces/ecpg/preproc/check_rules.pl
index 6c8b004..566de5d 100644
--- a/src/interfaces/ecpg/preproc/check_rules.pl
+++ b/src/interfaces/ecpg/preproc/check_rules.pl
@@ -43,7 +43,7 @@ my %replace_line = (
 	  => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
 
 	'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
-	  'PREPARE prepared_name prep_type_clause AS PreparableStmt');
+	  'PREPARE prepared_name prep_type_clause AS PreparableStmt',);
 
 my $block        = '';
 my $yaccmode     = 0;
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 53efb57..6e3a62f 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1470,7 +1470,7 @@ sub lsn
 		'flush'   => 'pg_current_wal_flush_lsn()',
 		'write'   => 'pg_current_wal_lsn()',
 		'receive' => 'pg_last_wal_receive_lsn()',
-		'replay'  => 'pg_last_wal_replay_lsn()');
+		'replay'  => 'pg_last_wal_replay_lsn()',);
 
 	$mode = '<undef>' if !defined($mode);
 	croak "unknown mode for 'lsn': '$mode', valid modes are "
@@ -1657,7 +1657,7 @@ sub slot
 	my @columns = (
 		'plugin', 'slot_type',  'datoid', 'database',
 		'active', 'active_pid', 'xmin',   'catalog_xmin',
-		'restart_lsn');
+		'restart_lsn',);
 	return $self->query_hash(
 		'postgres',
 		"SELECT __COLUMNS__ FROM pg_catalog.pg_replication_slots WHERE slot_name = '$slot_name'",
@@ -1696,7 +1696,7 @@ sub pg_recvlogical_upto
 
 	my @cmd = (
 		'pg_recvlogical', '-S', $slot_name, '--dbname',
-		$self->connstr($dbname));
+		$self->connstr($dbname),);
 	push @cmd, '--endpos', $endpos;
 	push @cmd, '-f', '-', '--no-loop', '--start';
 
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
index 824fa4d..dec17d0 100644
--- a/src/test/recovery/t/003_recovery_targets.pl
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -119,19 +119,19 @@ test_recovery_standby('LSN', 'standby_5', $node_master, \@recovery_params,
 @recovery_params = (
 	"recovery_target_name = '$recovery_name'",
 	"recovery_target_xid  = '$recovery_txid'",
-	"recovery_target_time = '$recovery_time'");
+	"recovery_target_time = '$recovery_time'",);
 test_recovery_standby('name + XID + time',
 	'standby_6', $node_master, \@recovery_params, "3000", $lsn3);
 @recovery_params = (
 	"recovery_target_time = '$recovery_time'",
 	"recovery_target_name = '$recovery_name'",
-	"recovery_target_xid  = '$recovery_txid'");
+	"recovery_target_xid  = '$recovery_txid'",);
 test_recovery_standby('time + name + XID',
 	'standby_7', $node_master, \@recovery_params, "2000", $lsn2);
 @recovery_params = (
 	"recovery_target_xid  = '$recovery_txid'",
 	"recovery_target_time = '$recovery_time'",
-	"recovery_target_name = '$recovery_name'");
+	"recovery_target_name = '$recovery_name'",);
 test_recovery_standby('XID + time + name',
 	'standby_8', $node_master, \@recovery_params, "4000", $lsn4);
 @recovery_params = (
diff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm
index 884c330..dc2d6ec 100644
--- a/src/tools/msvc/Install.pm
+++ b/src/tools/msvc/Install.pm
@@ -25,7 +25,7 @@ my @client_program_files = (
 	'libpgtypes',     'libpq',      'pg_basebackup', 'pg_config',
 	'pg_dump',        'pg_dumpall', 'pg_isready',    'pg_receivewal',
 	'pg_recvlogical', 'pg_restore', 'psql',          'reindexdb',
-	'vacuumdb',       @client_contribs);
+	'vacuumdb',       @client_contribs,);
 
 sub lcopy
 {
@@ -80,7 +80,7 @@ sub Install
 	my @client_dirs = ('bin', 'lib', 'share', 'symbols');
 	my @all_dirs = (
 		@client_dirs, 'doc', 'doc/contrib', 'doc/extension', 'share/contrib',
-		'share/extension', 'share/timezonesets', 'share/tsearch_data');
+		'share/extension', 'share/timezonesets', 'share/tsearch_data',);
 	if ($insttype eq "client")
 	{
 		EnsureDirectories($target, @client_dirs);
@@ -652,7 +652,7 @@ sub CopyIncludeFiles
 		EnsureDirectories("$target/include/server/$d");
 		my @args = (
 			'xcopy', '/s', '/i', '/q', '/r', '/y', "src\\include\\$d\\*.h",
-			"$ctarget\\include\\server\\$d\\");
+			"$ctarget\\include\\server\\$d\\",);
 		system(@args) && croak("Failed to copy include directory $d\n");
 	}
 	closedir($D);
@@ -710,7 +710,7 @@ sub GenerateNLSFiles
 				"$nlspath\\bin\\msgfmt",
 				'-o',
 				"$target\\share\\locale\\$lang\\LC_MESSAGES\\$prgm-$majorver.mo",
-				$_);
+				$_,);
 			system(@args) && croak("Could not run msgfmt on $dir\\$_");
 			print ".";
 		}
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index b2f5fd6..7551fe4 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -47,7 +47,7 @@ my @contrib_excludes = (
 	'ltree_plpython',  'pgcrypto',
 	'sepgsql',         'brin',
 	'test_extensions', 'test_pg_dump',
-	'snapshot_too_old');
+	'snapshot_too_old',);
 
 # Set of variables for frontend modules
 my $frontend_defines = { 'initdb' => 'FRONTEND' };
@@ -55,11 +55,11 @@ my @frontend_uselibpq = ('pg_ctl', 'pg_upgrade', 'pgbench', 'psql', 'initdb');
 my @frontend_uselibpgport = (
 	'pg_archivecleanup', 'pg_test_fsync',
 	'pg_test_timing',    'pg_upgrade',
-	'pg_waldump',        'pgbench');
+	'pg_waldump',        'pgbench',);
 my @frontend_uselibpgcommon = (
 	'pg_archivecleanup', 'pg_test_fsync',
 	'pg_test_timing',    'pg_upgrade',
-	'pg_waldump',        'pgbench');
+	'pg_waldump',        'pgbench',);
 my $frontend_extralibs = {
 	'initdb'     => ['ws2_32.lib'],
 	'pg_restore' => ['ws2_32.lib'],
@@ -74,7 +74,7 @@ my $frontend_extrasource = {
 	  [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ] };
 my @frontend_excludes = (
 	'pgevent',    'pg_basebackup', 'pg_rewind', 'pg_dump',
-	'pg_waldump', 'scripts');
+	'pg_waldump', 'scripts',);
 
 sub mkvcbuild
 {
@@ -626,7 +626,7 @@ sub mkvcbuild
 					(map { "-D$_" } @perl_embed_ccflags, $define || ()),
 					$source_file,
 					'/link',
-					$perl_libs[0]);
+					$perl_libs[0],);
 				my $compile_output = `@cmd 2>&1`;
 				-f $exe || die "Failed to build Perl test:\n$compile_output";
 
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 3a88638..3ce8512 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -106,7 +106,7 @@ sub installcheck
 		"--schedule=${schedule}_schedule",
 		"--max-concurrent-tests=20",
 		"--encoding=SQL_ASCII",
-		"--no-locale");
+		"--no-locale",);
 	push(@args, $maxconn) if $maxconn;
 	system(@args);
 	my $status = $? >> 8;
@@ -126,7 +126,7 @@ sub check
 		"--max-concurrent-tests=20",
 		"--encoding=SQL_ASCII",
 		"--no-locale",
-		"--temp-instance=./tmp_check");
+		"--temp-instance=./tmp_check",);
 	push(@args, $maxconn)     if $maxconn;
 	push(@args, $temp_config) if $temp_config;
 	system(@args);
@@ -152,7 +152,7 @@ sub ecpgcheck
 		"--schedule=${schedule}_schedule",
 		"--encoding=SQL_ASCII",
 		"--no-locale",
-		"--temp-instance=./tmp_chk");
+		"--temp-instance=./tmp_chk",);
 	push(@args, $maxconn) if $maxconn;
 	system(@args);
 	$status = $? >> 8;
@@ -168,7 +168,7 @@ sub isolationcheck
 		"../../../$Config/pg_isolation_regress/pg_isolation_regress",
 		"--bindir=../../../$Config/psql",
 		"--inputdir=.",
-		"--schedule=./isolation_schedule");
+		"--schedule=./isolation_schedule",);
 	push(@args, $maxconn) if $maxconn;
 	system(@args);
 	my $status = $? >> 8;
@@ -352,7 +352,7 @@ sub plcheck
 		my @args = (
 			"$topdir/$Config/pg_regress/pg_regress",
 			"--bindir=$topdir/$Config/psql",
-			"--dbname=pl_regression", @lang_args, @tests);
+			"--dbname=pl_regression", @lang_args, @tests,);
 		system(@args);
 		my $status = $? >> 8;
 		exit $status if $status;
@@ -404,7 +404,7 @@ sub subdircheck
 	my @args = (
 		"$topdir/$Config/pg_regress/pg_regress",
 		"--bindir=${topdir}/${Config}/psql",
-		"--dbname=contrib_regression", @opts, @tests);
+		"--dbname=contrib_regression", @opts, @tests,);
 	print join(' ',@args),"\n";
 	system(@args);
 	chdir "..";
@@ -553,7 +553,7 @@ sub upgradecheck
 	print "\nRunning pg_upgrade\n\n";
 	@args = (
 		'pg_upgrade', '-d', "$data.old", '-D', $data, '-b',
-		$bindir,      '-B', $bindir);
+		$bindir,      '-B', $bindir,);
 	system(@args) == 0 or exit 1;
 	print "\nStarting new cluster\n\n";
 	@args = ('pg_ctl', '-l', "$logdir/postmaster2.log", 'start');
#17Stephen Frost
sfrost@snowman.net
In reply to: Andrew Dunstan (#16)
Re: perlcritic and perltidy

Greetings,

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

On 05/08/2018 10:06 AM, Andrew Dunstan wrote:

{
    find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print;
    find . -type f -perm -100 -exec file {} \; -print \
        | egrep -i ':.*perl[0-9]*\>' \
        | cut -d: -f1;
} \
| sort -u | xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas

Here's a diff of all the places it found fixed. At this stage I don't
think it's worth it. If someone wants to write a perlcritic policy that
identifies missing trailing commas reasonably comprehensively, we can
look again. Otherwise we should just clean them up as we come across them.

[...]

diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index f387c86..ac19682 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -34,7 +34,7 @@ sub ParseHeader
'Oid'           => 'oid',
'NameData'      => 'name',
'TransactionId' => 'xid',
-		'XLogRecPtr'    => 'pg_lsn');
+		'XLogRecPtr'    => 'pg_lsn',);

my %catalog;
my $declaring_attributes = 0;

There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?
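To illustrate the point with a made-up snippet (nothing from the tree, just
the two layouts side by side):

use strict;
use warnings;

# Current layout: the last element shares a line with ');', so adding
# an entry means touching two lines.
my %style_now = (
	'alpha' => 1,
	'beta'  => 2);

# Trailing comma with ')' moved to its own line: adding an entry is a
# one-line diff.
my %style_alt = (
	'alpha' => 1,
	'beta'  => 2,
);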

Or is that going to be handled in a follow-up patch?

Thanks!

Stephen

#18Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Stephen Frost (#17)
Re: perlcritic and perltidy

Stephen Frost wrote:

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

-		'XLogRecPtr'    => 'pg_lsn');
+		'XLogRecPtr'    => 'pg_lsn',);

There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

Or is that going to be handled in a follow-up patch?

IMO we should classify this one as pointless uglification and move on.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#19Stephen Frost
sfrost@snowman.net
In reply to: Alvaro Herrera (#18)
Re: perlcritic and perltidy

Greetings,

* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:

Stephen Frost wrote:

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

-		'XLogRecPtr'    => 'pg_lsn');
+		'XLogRecPtr'    => 'pg_lsn',);

There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

Or is that going to be handled in a follow-up patch?

IMO we should classify this one as pointless uglification and move on.

I'm all for the change if we actually get to a result where the lines
can be moved and you don't have to go muck with the extra stuff at the
end of the line (add a comma, or remove a comma, remove or add the
parens, etc). If we aren't going all the way to get to that point then
I tend to agree that it's not a useful change to make.

Thanks!

Stephen

#20Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: Stephen Frost (#17)
Re: perlcritic and perltidy

On 05/08/2018 12:51 PM, Stephen Frost wrote:

Greetings,

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

On 05/08/2018 10:06 AM, Andrew Dunstan wrote:

{
    find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print;
    find . -type f -perm -100 -exec file {} \; -print \
        | egrep -i ':.*perl[0-9]*\>' \
        | cut -d: -f1;
} \
| sort -u | xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas

Here's a diff of all the places it found fixed. At this stage I don't
think it's worth it. If someone wants to write a perlcritic policy that
identifies missing trailing commas reasonably comprehensively, we can
look again. Otherwise we should just clean them up as we come across them.

[...]

diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index f387c86..ac19682 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -34,7 +34,7 @@ sub ParseHeader
'Oid'           => 'oid',
'NameData'      => 'name',
'TransactionId' => 'xid',
-		'XLogRecPtr'    => 'pg_lsn');
+		'XLogRecPtr'    => 'pg_lsn',);

my %catalog;
my $declaring_attributes = 0;

There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

No, not really.

Or is that going to be handled in a follow-up patch?

No, the current proposal is to keep the vertical tightness settings for
parentheses, which is precisely this set of cases, because otherwise
there are some ugly code effects (see Peter's email upthread).

So I think we're all in agreement to forget this trailing comma thing.

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#21Stephen Frost
sfrost@snowman.net
In reply to: Andrew Dunstan (#20)
Re: perlcritic and perltidy

Andrew,

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

On 05/08/2018 12:51 PM, Stephen Frost wrote:

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

No, not really.

Or is that going to be handled in a follow-up patch?

No, the current proposal is to keep the vertical tightness settings for
parentheses, which is precisely this set of cases, because otherwise
there are some ugly code effects (see Peter's email upthread).

So I think we're all in agreement to forget this trailing comma thing.

Well, agreed, for parentheses, but for curly-brace blocks, it'd be nice to
have them since those will end up on their own line, right?
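For example, roughly (a sketch only, reusing the hashref shape from the
Mkvcbuild.pm hunk earlier in the thread):

use strict;
use warnings;

# With a curly-brace constructor the closing '};' already lands on its
# own line, so a trailing comma after the last element keeps future
# additions to a one-line diff.
my $frontend_extralibs = {
	'initdb'     => ['ws2_32.lib'],
	'pg_restore' => ['ws2_32.lib'],
};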

Thanks!

Stephen

#22Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: Stephen Frost (#21)
Re: perlcritic and perltidy

On 05/08/2018 01:18 PM, Stephen Frost wrote:

Andrew,

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

On 05/08/2018 12:51 PM, Stephen Frost wrote:

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

No, not really.

Or is that going to be handled in a follow-up patch?

No, the current proposal is to keep the vertical tightness settings for
parentheses, which is precisely this set of cases, because otherwise
there are some ugly code efects (see Peter's email upthread)

So I think we're all in agreement to fortget this trailing comma thing.

Well, agreed, for parentheses, but for curly-brace blocks, it'd be nice to
have them since those will end up on their own line, right?

Yes, but there isn't a perlcritic policy I can find that detects them
reliably. If you know of one we can revisit it. Specifically, the one
from the Pulp collection called RequireTrailingCommaAtNewline didn't
work very well when I tried it.
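If that policy (or a better one) ever did prove reliable, enabling it would
presumably just be a perlcriticrc stanza along these lines (untested here,
and it needs the Perl::Critic::Pulp distribution installed):

# hypothetical perlcriticrc entry -- not something being proposed now
[CodeLayout::RequireTrailingCommaAtNewline]
severity = 3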

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#23Stephen Frost
sfrost@snowman.net
In reply to: Andrew Dunstan (#22)
Re: perlcritic and perltidy

Andrew,

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

On 05/08/2018 01:18 PM, Stephen Frost wrote:

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:

On 05/08/2018 12:51 PM, Stephen Frost wrote:

* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:
There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

No, not really.

Or is that going to be handled in a follow-up patch?

No, the current proposal is to keep the vertical tightness settings for
parentheses, which is precisely this set of cases, because otherwise
there are some ugly code effects (see Peter's email upthread).

So I think we're all in agreement to forget this trailing comma thing.

Well, agreed, for parentheses, but for curly-brace blocks, it'd be nice to
have them since those will end up on their own line, right?

Yes, but there isn't a perlcritic policy I can find that detects them
reliably. If you know of one we can revisit it. Specifically, the one
from the Pulp collection called RequireTrailingCommaAtNewline didn't
work very well when I tried it.

Ok, perhaps we can't automate/enforce it, but if everyone is agreed on
it then we should at least consider it something of a policy and, as you
said up-thread, clean things up as we come to them. I'd love to clean
up the pg_dump regression tests in such a way to make it simpler to work
with in the future, as long as we're agreed on it and we don't end up
getting complaints from perlcritic/perltidy or having them end up
being removed..

Thanks!

Stephen

#24Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#2)
Re: perlcritic and perltidy

On Sun, May 6, 2018 at 11:53:34AM -0400, Tom Lane wrote:

What sort of changes do we get if we remove those two flags as you prefer?
It'd help to see some examples.

Since we just went to a new perltidy version, and made some other
policy changes for it, in HEAD, it'd make sense to make any further
changes in this same release cycle rather than drip drip drip over
multiple cycles. We just need to get some consensus about what
style we like.

I saw you looking for feedback so I wanted to give mine. Also, Andrew,
thanks for working on this --- it is a big help to have limited Perl
critic reports and good tidiness.

I am using the src/tools/pgindent/perltidyrc setting for my own Perl
code, but needed to add these two:

--noblanks-before-comments
--break-after-all-operators

The first one fixes odd blank lines when I put comments inside
conditional tests, e.g.:

if (!$options{args_supplied} &&
!$is_debug &&
defined($stat_main) &&
defined($stat_cache) &&
$stat_main->mtime < $stat_cache->mtime &&
# is local time zone?
(!defined($ENV{TZ}) || $ENV{TZ} =~ m/^E.T$/))

Without the first option, I get:

if (!$options{args_supplied} &&
!$is_debug &&
defined($stat_main) &&
defined($stat_cache) &&
$stat_main->mtime < $stat_cache->mtime &&
-->
# is local time zone?
(!defined($ENV{TZ}) || $ENV{TZ} =~ m/^E.T$/))

which just looks odd to me. Am I the only person who often does this?

The second option, --break-after-all-operators, is more of a personal
taste, but it does match how our C code works, and people have said I
write C code in Perl. ;-)
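For anyone who hasn't played with that second flag, the difference is
roughly this (a made-up fragment, hand-formatted rather than actual perltidy
output; by default perltidy breaks before && and ||, while the flag asks it
to break after operators):

use strict;
use warnings;

sub wrapped_after
{
	my ($foo, $bar, $baz) = @_;
	# --break-after-all-operators: the operator ends the line,
	# much like the project's C code
	return $foo &&
	  $bar &&
	  defined($baz);
}

sub wrapped_before
{
	my ($foo, $bar, $baz) = @_;
	# default perltidy placement: the operator starts the
	# continuation line
	return $foo
	  && $bar
	  && defined($baz);
}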

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
#25Andrew Dunstan
andrew.dunstan@2ndquadrant.com
In reply to: Bruce Momjian (#24)
Re: perlcritic and perltidy

On 05/25/2018 03:04 PM, Bruce Momjian wrote:

On Sun, May 6, 2018 at 11:53:34AM -0400, Tom Lane wrote:

What sort of changes do we get if we remove those two flags as you prefer?
It'd help to see some examples.

Since we just went to a new perltidy version, and made some other
policy changes for it, in HEAD, it'd make sense to make any further
changes in this same release cycle rather than drip drip drip over
multiple cycles. We just need to get some consensus about what
style we like.

I saw you looking for feedback so I wanted to give mine. Also, Andrew,
thanks for working on this --- it is a big help to have limited Perl
critic reports and good tidiness.

I am using the src/tools/pgindent/perltidyrc setting for my own Perl
code, but needed to add these two:

--noblanks-before-comments
--break-after-all-operators

The first one fixes odd blank lines when I put comments inside
conditional tests, e.g.:

if (!$options{args_supplied} &&
!$is_debug &&
defined($stat_main) &&
defined($stat_cache) &&
$stat_main->mtime < $stat_cache->mtime &&
# is local time zone?
(!defined($ENV{TZ}) || $ENV{TZ} =~ m/^E.T$/))

Without the first option, I get:

if (!$options{args_supplied} &&
!$is_debug &&
defined($stat_main) &&
defined($stat_cache) &&
$stat_main->mtime < $stat_cache->mtime &&
-->
# is local time zone?
(!defined($ENV{TZ}) || $ENV{TZ} =~ m/^E.T$/))

which just looks odd to me. Am I the only person who often does this?

The second option, --break-after-all-operators, is more of a personal
taste, but it does match how our C code works, and people have said I
write C code in Perl. ;-)

I agree with adding --no-blanks-before-comments. That doesn't remove any
blank lines, it just stops perltidy from adding them before comments, so
adding it to the perltidyrc doesn't change anything in the existing code.

I looked at --break-after-all-operators, but I didn't like the result. I
tried to refine it by adding --want-break-before='. : ? && || and or'.
However, it didn't do what it was told in the case of ? operators. That
seems like a perltidy bug. The bug persists even in the latest version
of perltidy. So I think we should just leave things as they are in this
respect.
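
For reference, the layout that setting is meant to produce for a long
ternary is something like the following (made-up variables, hand-formatted
to show the intended break-before-? placement, not actual perltidy output):

use strict;
use warnings;

my $limit = $ENV{TEST_LIMIT};

# Intended layout: the ? and : lead their continuation lines.
my $effective_limit =
  defined($limit)
  ? $limit
  : 100;

print "$effective_limit\n";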

cheers

andrew

--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services