emacs configuration for new perltidy settings
This might be useful for some people. Here is an emacs configuration
for perl-mode that is compatible with the new perltidy settings. Note
that the default perl-mode settings produce indentation that will be
completely shredded by the new perltidy settings.
(defun pgsql-perl-style ()
"Perl style adjusted for PostgreSQL project"
(interactive)
(setq tab-width 4)
(setq perl-indent-level 4)
(setq perl-continued-statement-offset 4)
(setq perl-continued-brace-offset 4)
(setq perl-brace-offset 0)
(setq perl-brace-imaginary-offset 0)
(setq perl-label-offset -2))
(add-hook 'perl-mode-hook
(lambda ()
(if (string-match "postgresql" buffer-file-name)
(pgsql-perl-style))))
Peter Eisentraut <peter_e@gmx.net> writes:
This might be useful for some people. Here is an emacs configuration
for perl-mode that is compatible with the new perltidy settings. Note
that the default perl-mode settings produce indentation that will be
completely shredded by the new perltidy settings.
Thanks!
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Thu, Jul 12, 2012 at 6:35 AM, Peter Eisentraut <peter_e@gmx.net> wrote:
This might be useful for some people. Here is an emacs configuration
for perl-mode that is compatible with the new perltidy settings. Note
that the default perl-mode settings produce indentation that will be
completely shredded by the new perltidy settings.(defun pgsql-perl-style ()
"Perl style adjusted for PostgreSQL project"
(interactive)
(setq tab-width 4)
(setq perl-indent-level 4)
(setq perl-continued-statement-offset 4)
(setq perl-continued-brace-offset 4)
(setq perl-brace-offset 0)
(setq perl-brace-imaginary-offset 0)
(setq perl-label-offset -2))(add-hook 'perl-mode-hook
(lambda ()
(if (string-match "postgresql" buffer-file-name)
(pgsql-perl-style))))
Cool thanks!
Very helpful.
--
Michael Paquier
http://michael.otacoo.com
On Thu, Jul 12, 2012 at 12:35:26AM +0300, Peter Eisentraut wrote:
This might be useful for some people. Here is an emacs configuration
for perl-mode that is compatible with the new perltidy settings. Note
that the default perl-mode settings produce indentation that will be
completely shredded by the new perltidy settings.(defun pgsql-perl-style ()
"Perl style adjusted for PostgreSQL project"
(interactive)
(setq tab-width 4)
(setq perl-indent-level 4)
(setq perl-continued-statement-offset 4)
(setq perl-continued-brace-offset 4)
(setq perl-brace-offset 0)
(setq perl-brace-imaginary-offset 0)
(setq perl-label-offset -2))(add-hook 'perl-mode-hook
(lambda ()
(if (string-match "postgresql" buffer-file-name)
(pgsql-perl-style))))
Added to src/tools/editors/emacs.samples; applied patch attached.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +
Attachments:
perl.difftext/x-diff; charset=us-asciiDownload
diff --git a/src/tools/editors/emacs.samples b/src/tools/editors/emacs.samples
new file mode 100644
index d9cfa2f..c8d8d07
*** a/src/tools/editors/emacs.samples
--- b/src/tools/editors/emacs.samples
***************
*** 12,17 ****
--- 12,19 ----
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+ ;;; Mode for C files to match src/tools/pgindent/pgindent formatting
+
;;; This set is known to work with old versions of emacs
(setq auto-mode-alist
***************
*** 80,85 ****
--- 82,107 ----
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+ ;;; Mode for Perl files to match src/tools/pgindent/perltidyrc formatting
+
+ (defun pgsql-perl-style ()
+ "Perl style adjusted for PostgreSQL project"
+ (interactive)
+ (setq tab-width 4)
+ (setq perl-indent-level 4)
+ (setq perl-continued-statement-offset 4)
+ (setq perl-continued-brace-offset 4)
+ (setq perl-brace-offset 0)
+ (setq perl-brace-imaginary-offset 0)
+ (setq perl-label-offset -2))
+
+ (add-hook 'perl-mode-hook
+ (lambda ()
+ (if (string-match "postgresql" buffer-file-name)
+ (pgsql-perl-style))))
+
+ ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
;;; To work on the documentation, the following (or a variant, as above)
;;; can be helpful.
On Thu, Jul 12, 2012 at 12:35:26AM +0300, Peter Eisentraut wrote:
This might be useful for some people. Here is an emacs configuration
for perl-mode that is compatible with the new perltidy settings. Note
that the default perl-mode settings produce indentation that will be
completely shredded by the new perltidy settings.(defun pgsql-perl-style ()
"Perl style adjusted for PostgreSQL project"
(interactive)
(setq tab-width 4)
(setq perl-indent-level 4)
(setq perl-continued-statement-offset 4)
(setq perl-continued-brace-offset 4)
(Later, commit 56fb890 changed perl-continued-statement-offset to 2.) This
indents braces (perltidy aligns the brace with "if", but perl-mode adds
perl-continued-statement-offset + perl-continued-brace-offset = 6 columns):
if (-s "src/backend/snowball/stopwords/$lang.stop")
{
$stop = ", StopWords=$lang";
}
If I run perltidy on 60d9979, then run perl-mode indent, the diff between the
perltidy run and perl-mode indent run is:
129 files changed, 8468 insertions(+), 8468 deletions(-)
If I add (perl-continued-brace-offset . -2):
119 files changed, 3515 insertions(+), 3515 deletions(-)
If I add (perl-indent-continued-arguments . 4) as well:
86 files changed, 2626 insertions(+), 2626 deletions(-)
If I add (perl-indent-parens-as-block . t) as well:
65 files changed, 2373 insertions(+), 2373 deletions(-)
That's with GNU Emacs 24.5.1. Versions 24.3.1 and 21.4.1 show similar trends,
though 21.4.1 predates perl-indent-continued-arguments and
perl-indent-parens-as-block.
I'm attaching the patch to make it so, along with a patch that illustrates my
testing method. "sh reindent-perl.sh" will test emacs.samples using your
Emacs installation. (I don't plan to push the testing patch.)
Attachments:
perl-mode-indent-v1.patchtext/x-diff; charset=us-asciiDownload
diff --git a/.dir-locals.el b/.dir-locals.el
index eff4671..ab6208b 100644
--- a/.dir-locals.el
+++ b/.dir-locals.el
@@ -9,7 +9,7 @@
(indent-tabs-mode . nil)))
(perl-mode . ((perl-indent-level . 4)
(perl-continued-statement-offset . 2)
- (perl-continued-brace-offset . 4)
+ (perl-continued-brace-offset . -2)
(perl-brace-offset . 0)
(perl-brace-imaginary-offset . 0)
(perl-label-offset . -2)
diff --git a/src/tools/editors/emacs.samples b/src/tools/editors/emacs.samples
index a7152b0..529c98a 100644
--- a/src/tools/editors/emacs.samples
+++ b/src/tools/editors/emacs.samples
@@ -47,10 +47,13 @@
(interactive)
(setq perl-brace-imaginary-offset 0)
(setq perl-brace-offset 0)
- (setq perl-continued-brace-offset 4)
(setq perl-continued-statement-offset 2)
+ (setq perl-continued-brace-offset (- perl-continued-statement-offset))
(setq perl-indent-level 4)
(setq perl-label-offset -2)
+ ;; Next two aren't marked safe-local-variable, so .dir-locals.el omits them.
+ (setq perl-indent-continued-arguments 4)
+ (setq perl-indent-parens-as-block t)
(setq indent-tabs-mode t)
(setq tab-width 4))
test-perl-mode-indent-v1.patchtext/x-diff; charset=us-asciiDownload
diff --git a/reindent-perl.el b/reindent-perl.el
new file mode 100644
index 0000000..17ff125
--- /dev/null
+++ b/reindent-perl.el
@@ -0,0 +1,45 @@
+;; Import with-demoted-errors from Emacs 24.5.1 (present in 23.1+).
+(when (not (fboundp 'macroexp-progn))
+ (defun macroexp-progn (exps)
+ "Return an expression equivalent to `(progn ,@EXPS)."
+ (if (cdr exps) `(progn ,@exps) (car exps))))
+
+(when (not (fboundp 'with-demoted-errors))
+ (defmacro with-demoted-errors (format &rest body)
+ "Run BODY and demote any errors to simple messages.
+FORMAT is a string passed to `message' to format any error message.
+It should contain a single %-sequence; e.g., \"Error: %S\".
+
+If `debug-on-error' is non-nil, run BODY without catching its errors.
+This is to be used around code which is not expected to signal an error
+but which should be robust in the unexpected case that an error is signaled.
+
+For backward compatibility, if FORMAT is not a constant string, it
+is assumed to be part of BODY, in which case the message format
+used is \"Error: %S\"."
+ (let ((err (make-symbol "err"))
+ (format (if (and (stringp format) body) format
+ (prog1 "Error: %S"
+ (if format (push format body))))))
+ `(condition-case ,err
+ ,(macroexp-progn body)
+ (error (message ,format ,err) nil)))))
+
+
+(load (expand-file-name "./src/tools/editors/emacs.samples"))
+(setq enable-local-variables :all) ; not actually needed, given emacs.samples
+
+(while (setq fname (read-from-minibuffer "File name: "))
+ (set-buffer "*scratch*")
+ (find-file fname)
+ (message "%s"
+ (list
+ (current-buffer)
+ perl-indent-level
+ tab-width
+ (if (boundp 'perl-indent-continued-arguments)
+ perl-indent-continued-arguments "no-such-var")
+ (if (boundp 'perl-indent-parens-as-block)
+ perl-indent-parens-as-block "no-such-var")))
+ (with-demoted-errors (indent-region (point-min) (point-max) nil))
+ (save-buffer))
diff --git a/reindent-perl.sh b/reindent-perl.sh
new file mode 100644
index 0000000..ae34646
--- /dev/null
+++ b/reindent-perl.sh
@@ -0,0 +1,10 @@
+# Reindent Perl source using Emacs perl-mode. This tests how well perl-mode
+# matches perltidy.
+
+if [ -n "$1" ]; then
+ for arg; do printf '%s\n' "$arg"; done # Try specific files.
+else
+ . src/tools/perlcheck/find_perl_files
+ find_perl_files
+fi | emacs -no-site-file -batch -load reindent-perl.el
+git diff --shortstat
On 1/3/19 12:53 AM, Noah Misch wrote:
On Thu, Jul 12, 2012 at 12:35:26AM +0300, Peter Eisentraut wrote:
This might be useful for some people. Here is an emacs configuration
for perl-mode that is compatible with the new perltidy settings. Note
that the default perl-mode settings produce indentation that will be
completely shredded by the new perltidy settings.(defun pgsql-perl-style ()
"Perl style adjusted for PostgreSQL project"
(interactive)
(setq tab-width 4)
(setq perl-indent-level 4)
(setq perl-continued-statement-offset 4)
(setq perl-continued-brace-offset 4)(Later, commit 56fb890 changed perl-continued-statement-offset to 2.) This
indents braces (perltidy aligns the brace with "if", but perl-mode adds
perl-continued-statement-offset + perl-continued-brace-offset = 6 columns):if (-s "src/backend/snowball/stopwords/$lang.stop")
{
$stop = ", StopWords=$lang";
}If I run perltidy on 60d9979, then run perl-mode indent, the diff between the
perltidy run and perl-mode indent run is:
129 files changed, 8468 insertions(+), 8468 deletions(-)
If I add (perl-continued-brace-offset . -2):
119 files changed, 3515 insertions(+), 3515 deletions(-)
If I add (perl-indent-continued-arguments . 4) as well:
86 files changed, 2626 insertions(+), 2626 deletions(-)
If I add (perl-indent-parens-as-block . t) as well:
65 files changed, 2373 insertions(+), 2373 deletions(-)That's with GNU Emacs 24.5.1. Versions 24.3.1 and 21.4.1 show similar trends,
though 21.4.1 predates perl-indent-continued-arguments and
perl-indent-parens-as-block.I'm attaching the patch to make it so, along with a patch that illustrates my
testing method. "sh reindent-perl.sh" will test emacs.samples using your
Emacs installation. (I don't plan to push the testing patch.)
Sounds good. What do the remaining diffs look like?
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Jan 08, 2019 at 08:17:43AM -0500, Andrew Dunstan wrote:
On 1/3/19 12:53 AM, Noah Misch wrote:
If I run perltidy on 60d9979, then run perl-mode indent, the diff between the
perltidy run and perl-mode indent run is:
129 files changed, 8468 insertions(+), 8468 deletions(-)
If I add (perl-continued-brace-offset . -2):
119 files changed, 3515 insertions(+), 3515 deletions(-)
If I add (perl-indent-continued-arguments . 4) as well:
86 files changed, 2626 insertions(+), 2626 deletions(-)
If I add (perl-indent-parens-as-block . t) as well:
65 files changed, 2373 insertions(+), 2373 deletions(-)
Sounds good. What do the remaining diffs look like?
I've attached them. Most involve statement continuation in some form. For
example, src/backend/utils/mb/Unicode has numerous instances where perl-mode
indents hashref-constructor curly braces as though they were code blocks.
Other diff lines involve labels. Others are in string literals.
Attachments:
perltidy-to-perl-mode-indent.difftext/plain; charset=us-asciiDownload
diff --git a/contrib/intarray/bench/bench.pl b/contrib/intarray/bench/bench.pl
index 92035d6..a16abfe 100755
--- a/contrib/intarray/bench/bench.pl
+++ b/contrib/intarray/bench/bench.pl
@@ -14,21 +14,21 @@ getopts('d:b:s:veorauc', \%opt);
if (!(scalar %opt && defined $opt{s}))
{
print <<EOT;
-Usage:
-$0 -d DATABASE -s SECTIONS [-b NUMBER] [-v] [-e] [-o] [-r] [-a] [-u]
--d DATABASE -DATABASE
--b NUMBER -number of repeats
--s SECTIONS -sections, format sid1[,sid2[,sid3[...]]]]
--v -verbose (show SQL)
--e -show explain
--r -use RD-tree index
--a -AND section
--o -show output
--u -unique
--c -count
+ Usage:
+ $0 -d DATABASE -s SECTIONS [-b NUMBER] [-v] [-e] [-o] [-r] [-a] [-u]
+ -d DATABASE -DATABASE
+ -b NUMBER -number of repeats
+ -s SECTIONS -sections, format sid1[,sid2[,sid3[...]]]]
+ -v -verbose (show SQL)
+ -e -show explain
+ -r -use RD-tree index
+ -a -AND section
+ -o -show output
+ -u -unique
+ -c -count
-EOT
- exit;
+ EOT
+ exit;
}
$opt{d} ||= '_int4';
@@ -73,14 +73,14 @@ my $outf;
if ($opt{c})
{
$outf =
- ($opt{u}) ? 'count( distinct message.mid )' : 'count( message.mid )';
+ ($opt{u}) ? 'count( distinct message.mid )' : 'count( message.mid )';
}
else
{
$outf = ($opt{u}) ? 'distinct( message.mid )' : 'message.mid';
}
my $sql =
- "select $outf from "
+ "select $outf from "
. join(', ', keys %table)
. " where "
. join(' AND ', @where) . ';';
diff --git a/contrib/intarray/bench/create_test.pl b/contrib/intarray/bench/create_test.pl
index d2c678b..e1ec21e 100755
--- a/contrib/intarray/bench/create_test.pl
+++ b/contrib/intarray/bench/create_test.pl
@@ -15,7 +15,7 @@ create table message_section_map (
EOT
-open(my $msg, '>', "message.tmp") || die;
+ open(my $msg, '>', "message.tmp") || die;
open(my $map, '>', "message_section_map.tmp") || die;
srand(1);
@@ -38,7 +38,7 @@ foreach my $i (1 .. 200000)
my %hash;
@sect =
grep { $hash{$_}++; $hash{$_} <= 1 }
- map { int((rand()**4) * 100) } 0 .. (int(rand() * 5));
+ map { int((rand()**4) * 100) } 0 .. (int(rand() * 5));
}
if ($#sect < 0 || rand() < 0.1)
{
@@ -72,7 +72,7 @@ select count(*) from message_section_map;
EOT
-unlink 'message.tmp', 'message_section_map.tmp';
+ unlink 'message.tmp', 'message_section_map.tmp';
sub copytable
{
diff --git a/doc/src/sgml/generate-errcodes-table.pl b/doc/src/sgml/generate-errcodes-table.pl
index ebec431..2931826 100644
--- a/doc/src/sgml/generate-errcodes-table.pl
+++ b/doc/src/sgml/generate-errcodes-table.pl
@@ -44,7 +44,7 @@ while (<$errcodes>)
die unless /^([^\s]{5})\s+([EWS])\s+([^\s]+)(?:\s+)?([^\s]+)?/;
(my $sqlstate, my $type, my $errcode_macro, my $condition_name) =
- ($1, $2, $3, $4);
+ ($1, $2, $3, $4);
# Skip lines without PL/pgSQL condition names
next unless defined($condition_name);
diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index d5c096f..af818dc 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -90,17 +90,17 @@ sub ParseHeader
if (/^DECLARE_TOAST\(\s*(\w+),\s*(\d+),\s*(\d+)\)/)
{
push @{ $catalog{toasting} },
- { parent_table => $1, toast_oid => $2, toast_index_oid => $3 };
+ { parent_table => $1, toast_oid => $2, toast_index_oid => $3 };
}
elsif (/^DECLARE_(UNIQUE_)?INDEX\(\s*(\w+),\s*(\d+),\s*(.+)\)/)
{
push @{ $catalog{indexing} },
- {
+ {
is_unique => $1 ? 1 : 0,
index_name => $2,
index_oid => $3,
index_decl => $4
- };
+ };
}
elsif (/^CATALOG\((\w+),(\d+),(\w+)\)/)
{
@@ -354,7 +354,7 @@ sub AddDefaultValues
if (@missing_fields)
{
die sprintf "missing values for field(s) %s in %s.dat line %s\n",
- join(', ', @missing_fields), $catname, $row->{line_number};
+ join(', ', @missing_fields), $catname, $row->{line_number};
}
}
@@ -509,7 +509,7 @@ sub FindAllOidsFromHeaders
if (!$catalog->{bootstrap})
{
push @oids, $catalog->{relation_oid}
- if ($catalog->{relation_oid});
+ if ($catalog->{relation_oid});
push @oids, $catalog->{rowtype_oid} if ($catalog->{rowtype_oid});
}
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl
index fe9faba..5451b9f 100644
--- a/src/backend/catalog/genbki.pl
+++ b/src/backend/catalog/genbki.pl
@@ -118,19 +118,19 @@ foreach my $header (@input_files)
foreach my $toast (@{ $catalog->{toasting} })
{
push @toast_decls,
- sprintf "declare toast %s %s on %s\n",
- $toast->{toast_oid}, $toast->{toast_index_oid},
- $toast->{parent_table};
+ sprintf "declare toast %s %s on %s\n",
+ $toast->{toast_oid}, $toast->{toast_index_oid},
+ $toast->{parent_table};
$oidcounts{ $toast->{toast_oid} }++;
$oidcounts{ $toast->{toast_index_oid} }++;
}
foreach my $index (@{ $catalog->{indexing} })
{
push @index_decls,
- sprintf "declare %sindex %s %s %s\n",
- $index->{is_unique} ? 'unique ' : '',
- $index->{index_name}, $index->{index_oid},
- $index->{index_decl};
+ sprintf "declare %sindex %s %s %s\n",
+ $index->{is_unique} ? 'unique ' : '',
+ $index->{index_name}, $index->{index_oid},
+ $index->{index_decl};
$oidcounts{ $index->{index_oid} }++;
}
}
@@ -154,7 +154,7 @@ die "found $found duplicate OID(s) in catalog data\n" if $found;
# starting at FirstGenbkiObjectId.
my $FirstGenbkiObjectId =
Catalog::FindDefinedSymbol('access/transam.h', $include_path,
- 'FirstGenbkiObjectId');
+ 'FirstGenbkiObjectId');
my $GenbkiNextOid = $FirstGenbkiObjectId;
@@ -166,13 +166,13 @@ my $GenbkiNextOid = $FirstGenbkiObjectId;
# to handle those sorts of things is in initdb.c's bootstrap_template1().)
my $BOOTSTRAP_SUPERUSERID =
Catalog::FindDefinedSymbolFromData($catalog_data{pg_authid},
- 'BOOTSTRAP_SUPERUSERID');
+ 'BOOTSTRAP_SUPERUSERID');
my $C_COLLATION_OID =
Catalog::FindDefinedSymbolFromData($catalog_data{pg_collation},
- 'C_COLLATION_OID');
+ 'C_COLLATION_OID');
my $PG_CATALOG_NAMESPACE =
Catalog::FindDefinedSymbolFromData($catalog_data{pg_namespace},
- 'PG_CATALOG_NAMESPACE');
+ 'PG_CATALOG_NAMESPACE');
# Build lookup tables for OID macro substitutions and for pg_attribute
@@ -202,7 +202,7 @@ foreach my $row (@{ $catalog_data{pg_operator} })
# There is no unique name, so we need to invent one that contains
# the relevant type names.
my $key = sprintf "%s(%s,%s)",
- $row->{oprname}, $row->{oprleft}, $row->{oprright};
+ $row->{oprname}, $row->{oprleft}, $row->{oprright};
$operoids{$key} = $row->{oid};
}
@@ -303,7 +303,7 @@ foreach my $catname (@catnames)
# Opening boilerplate for pg_*_d.h
printf $def <<EOM, $catname, $catname, uc $catname, uc $catname;
-/*-------------------------------------------------------------------------
+ /*-------------------------------------------------------------------------
*
* %s_d.h
* Macro definitions for %s
@@ -435,93 +435,93 @@ EOM
# Substitute constant values we acquired above.
# (It's intentional that this can apply to parts of a field).
$bki_values{$attname} =~ s/\bPGUID\b/$BOOTSTRAP_SUPERUSERID/g;
- $bki_values{$attname} =~ s/\bPGNSP\b/$PG_CATALOG_NAMESPACE/g;
+ $bki_values{$attname} =~ s/\bPGNSP\b/$PG_CATALOG_NAMESPACE/g;
- # Replace OID synonyms with OIDs per the appropriate lookup rule.
- #
- # If the column type is oidvector or _oid, we have to replace
- # each element of the array as per the lookup rule.
- if ($column->{lookup})
- {
- my $lookup = $lookup_kind{ $column->{lookup} };
- my @lookupnames;
- my @lookupoids;
-
- die "unrecognized BKI_LOOKUP type " . $column->{lookup}
- if !defined($lookup);
-
- if ($atttype eq 'oidvector')
- {
- @lookupnames = split /\s+/, $bki_values{$attname};
- @lookupoids = lookup_oids($lookup, $catname, \%bki_values,
- @lookupnames);
- $bki_values{$attname} = join(' ', @lookupoids);
- }
- elsif ($atttype eq '_oid')
- {
- if ($bki_values{$attname} ne '_null_')
- {
- $bki_values{$attname} =~ s/[{}]//g;
- @lookupnames = split /,/, $bki_values{$attname};
- @lookupoids =
- lookup_oids($lookup, $catname, \%bki_values,
- @lookupnames);
- $bki_values{$attname} = sprintf "{%s}",
- join(',', @lookupoids);
- }
- }
- else
- {
- $lookupnames[0] = $bki_values{$attname};
- @lookupoids = lookup_oids($lookup, $catname, \%bki_values,
- @lookupnames);
- $bki_values{$attname} = $lookupoids[0];
- }
- }
- }
+ # Replace OID synonyms with OIDs per the appropriate lookup rule.
+ #
+ # If the column type is oidvector or _oid, we have to replace
+ # each element of the array as per the lookup rule.
+ if ($column->{lookup})
+ {
+ my $lookup = $lookup_kind{ $column->{lookup} };
+ my @lookupnames;
+ my @lookupoids;
- # Special hack to generate OID symbols for pg_type entries
- # that lack one.
- if ($catname eq 'pg_type' and !exists $bki_values{oid_symbol})
+ die "unrecognized BKI_LOOKUP type " . $column->{lookup}
+ if !defined($lookup);
+
+ if ($atttype eq 'oidvector')
{
- my $symbol = form_pg_type_symbol($bki_values{typname});
- $bki_values{oid_symbol} = $symbol
- if defined $symbol;
+ @lookupnames = split /\s+/, $bki_values{$attname};
+ @lookupoids = lookup_oids($lookup, $catname, \%bki_values,
+ @lookupnames);
+ $bki_values{$attname} = join(' ', @lookupoids);
}
-
- # Write to postgres.bki
- print_bki_insert(\%bki_values, $schema);
-
- # Write comments to postgres.description and
- # postgres.shdescription
- if (defined $bki_values{descr})
+ elsif ($atttype eq '_oid')
{
- if ($catalog->{shared_relation})
+ if ($bki_values{$attname} ne '_null_')
{
- printf $shdescr "%s\t%s\t%s\n",
- $bki_values{oid}, $catname, $bki_values{descr};
- }
- else
- {
- printf $descr "%s\t%s\t0\t%s\n",
- $bki_values{oid}, $catname, $bki_values{descr};
+ $bki_values{$attname} =~ s/[{}]//g;
+ @lookupnames = split /,/, $bki_values{$attname};
+ @lookupoids =
+ lookup_oids($lookup, $catname, \%bki_values,
+ @lookupnames);
+ $bki_values{$attname} = sprintf "{%s}",
+ join(',', @lookupoids);
}
}
-
- # Emit OID symbol
- if (defined $bki_values{oid_symbol})
+ else
{
- printf $def "#define %s %s\n",
- $bki_values{oid_symbol}, $bki_values{oid};
+ $lookupnames[0] = $bki_values{$attname};
+ @lookupoids = lookup_oids($lookup, $catname, \%bki_values,
+ @lookupnames);
+ $bki_values{$attname} = $lookupoids[0];
}
}
+}
+
+# Special hack to generate OID symbols for pg_type entries
+# that lack one.
+if ($catname eq 'pg_type' and !exists $bki_values{oid_symbol})
+{
+ my $symbol = form_pg_type_symbol($bki_values{typname});
+ $bki_values{oid_symbol} = $symbol
+ if defined $symbol;
+}
+
+# Write to postgres.bki
+print_bki_insert(\%bki_values, $schema);
+
+# Write comments to postgres.description and
+# postgres.shdescription
+if (defined $bki_values{descr})
+{
+ if ($catalog->{shared_relation})
+ {
+ printf $shdescr "%s\t%s\t%s\n",
+ $bki_values{oid}, $catname, $bki_values{descr};
+ }
+ else
+ {
+ printf $descr "%s\t%s\t0\t%s\n",
+ $bki_values{oid}, $catname, $bki_values{descr};
+ }
+}
- print $bki "close $catname\n";
- printf $def "\n#endif\t\t\t\t\t\t\t/* %s_D_H */\n", uc $catname;
+# Emit OID symbol
+if (defined $bki_values{oid_symbol})
+{
+ printf $def "#define %s %s\n",
+ $bki_values{oid_symbol}, $bki_values{oid};
+}
+}
+
+print $bki "close $catname\n";
+printf $def "\n#endif\t\t\t\t\t\t\t/* %s_D_H */\n", uc $catname;
- # Close and rename definition header
- close $def;
- Catalog::RenameTempFile($def_file, $tmpext);
+# Close and rename definition header
+close $def;
+Catalog::RenameTempFile($def_file, $tmpext);
}
# Any information needed for the BKI that is not contained in a pg_*.h header
@@ -560,9 +560,9 @@ print $schemapg <<EOM;
* ******************************
*
* It has been GENERATED by src/backend/catalog/genbki.pl
- *
- *-------------------------------------------------------------------------
- */
+ *
+ *-------------------------------------------------------------------------
+ */
#ifndef SCHEMAPG_H
#define SCHEMAPG_H
EOM
@@ -639,8 +639,8 @@ sub gen_pg_attribute
# Store schemapg entries for later.
morph_row_for_schemapg(\%row, $schema);
push @{ $schemapg_entries{$table_name} },
- sprintf "{ %s }",
- join(', ', grep { defined $_ } @row{@attnames});
+ sprintf "{ %s }",
+ join(', ', grep { defined $_ } @row{@attnames});
}
# Generate entries for system attributes.
@@ -716,7 +716,7 @@ sub morph_row_for_pgattr
# compare DefineAttr in bootstrap.c. oidvector and
# int2vector are also treated as not-nullable.
$row->{attnotnull} =
- $type->{typname} eq 'oidvector' ? 't'
+ $type->{typname} eq 'oidvector' ? 't'
: $type->{typname} eq 'int2vector' ? 't'
: $type->{typlen} eq 'NAMEDATALEN' ? 't'
: $type->{typlen} > 0 ? 't'
@@ -836,7 +836,7 @@ sub lookup_oids
warn sprintf
"unresolved OID reference \"%s\" in %s.dat line %s\n",
$lookupname, $catname, $bki_values->{line_number}
- if $lookupname ne '-' and $lookupname ne '0';
+ if $lookupname ne '-' and $lookupname ne '0';
}
}
return @lookupoids;
@@ -850,7 +850,7 @@ sub form_pg_type_symbol
# Skip for rowtypes of bootstrap catalogs, since they have their
# own naming convention defined elsewhere.
return
- if $typename eq 'pg_type'
+ if $typename eq 'pg_type'
or $typename eq 'pg_proc'
or $typename eq 'pg_attribute'
or $typename eq 'pg_class';
@@ -867,18 +867,18 @@ sub form_pg_type_symbol
sub usage
{
die <<EOM;
-Usage: genbki.pl [options] header...
+ Usage: genbki.pl [options] header...
-Options:
+ Options:
-I include path
-o output path
--set-version PostgreSQL version number for initdb cross-check
-genbki.pl generates BKI files and symbol definition
-headers from specially formatted header files and .dat
-files. The BKI files are used to initialize the
-postgres template database.
+ genbki.pl generates BKI files and symbol definition
+ headers from specially formatted header files and .dat
+ files. The BKI files are used to initialize the
+ postgres template database.
-Report bugs to <pgsql-bugs\@postgresql.org>.
-EOM
+ Report bugs to <pgsql-bugs\@postgresql.org>.
+ EOM
}
diff --git a/src/backend/parser/check_keywords.pl b/src/backend/parser/check_keywords.pl
index 718441c..28443e2 100644
--- a/src/backend/parser/check_keywords.pl
+++ b/src/backend/parser/check_keywords.pl
@@ -37,7 +37,7 @@ my $comment;
my @arr;
my %keywords;
-line: while (my $S = <$gram>)
+ line: while (my $S = <$gram>)
{
chomp $S; # strip record separator
@@ -156,7 +156,7 @@ open(my $kwlist, '<', $kwlist_filename)
my $prevkwstring = '';
my $bare_kwname;
my %kwhash;
-kwlist_line: while (<$kwlist>)
+ kwlist_line: while (<$kwlist>)
{
my ($line) = $_;
diff --git a/src/backend/utils/Gen_dummy_probes.pl b/src/backend/utils/Gen_dummy_probes.pl
index 91d7968..89da542 100644
--- a/src/backend/utils/Gen_dummy_probes.pl
+++ b/src/backend/utils/Gen_dummy_probes.pl
@@ -146,9 +146,9 @@ sub Run()
$CondReg ||= $s;
}
EOS: if ($doPrint)
- {
- print $_, "\n";
- }
+ {
+ print $_, "\n";
+ }
else
{
$doPrint = $doAutoPrint;
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index ed16737..d0c6ab2 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -81,10 +81,10 @@ foreach my $datfile (@input_files)
# Fetch some values for later.
my $FirstGenbkiObjectId =
Catalog::FindDefinedSymbol('access/transam.h', $include_path,
- 'FirstGenbkiObjectId');
+ 'FirstGenbkiObjectId');
my $INTERNALlanguageId =
Catalog::FindDefinedSymbolFromData($catalog_data{pg_language},
- 'INTERNALlanguageId');
+ 'INTERNALlanguageId');
# Collect certain fields from pg_proc.dat.
my @fmgr = ();
@@ -97,13 +97,13 @@ foreach my $row (@{ $catalog_data{pg_proc} })
next if $bki_values{prolang} ne $INTERNALlanguageId;
push @fmgr,
- {
+ {
oid => $bki_values{oid},
strict => $bki_values{proisstrict},
retset => $bki_values{proretset},
nargs => $bki_values{pronargs},
prosrc => $bki_values{prosrc},
- };
+ };
}
# Emit headers for both files
@@ -137,43 +137,43 @@ print $ofh <<OFH;
* ******************************
*
* It has been GENERATED by src/backend/utils/Gen_fmgrtab.pl
- *
- *-------------------------------------------------------------------------
- */
+ *
+ *-------------------------------------------------------------------------
+ */
#ifndef FMGROIDS_H
#define FMGROIDS_H
/*
- * Constant macros for the OIDs of entries in pg_proc.
- *
- * NOTE: macros are named after the prosrc value, ie the actual C name
- * of the implementing function, not the proname which may be overloaded.
- * For example, we want to be able to assign different macro names to both
- * char_text() and name_text() even though these both appear with proname
- * 'text'. If the same C function appears in more than one pg_proc entry,
- * its equivalent macro will be defined with the lowest OID among those
- * entries.
- */
+ * Constant macros for the OIDs of entries in pg_proc.
+ *
+ * NOTE: macros are named after the prosrc value, ie the actual C name
+ * of the implementing function, not the proname which may be overloaded.
+ * For example, we want to be able to assign different macro names to both
+ * char_text() and name_text() even though these both appear with proname
+ * 'text'. If the same C function appears in more than one pg_proc entry,
+ * its equivalent macro will be defined with the lowest OID among those
+ * entries.
+ */
OFH
print $pfh <<PFH;
/*-------------------------------------------------------------------------
- *
- * fmgrprotos.h
- * Prototypes for built-in functions.
- *
- * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
- * Portions Copyright (c) 1994, Regents of the University of California
- *
- * NOTES
- * ******************************
- * *** DO NOT EDIT THIS FILE! ***
- * ******************************
- *
- * It has been GENERATED by src/backend/utils/Gen_fmgrtab.pl
- *
- *-------------------------------------------------------------------------
- */
+ *
+ * fmgrprotos.h
+ * Prototypes for built-in functions.
+ *
+ * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * NOTES
+ * ******************************
+ * *** DO NOT EDIT THIS FILE! ***
+ * ******************************
+ *
+ * It has been GENERATED by src/backend/utils/Gen_fmgrtab.pl
+ *
+ *-------------------------------------------------------------------------
+ */
#ifndef FMGRPROTOS_H
#define FMGRPROTOS_H
@@ -184,9 +184,9 @@ PFH
print $tfh <<TFH;
/*-------------------------------------------------------------------------
- *
- * fmgrtab.c
- * The function manager's table of internal functions.
+ *
+ * fmgrtab.c
+ * The function manager's table of internal functions.
*
* Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
@@ -255,8 +255,8 @@ const int fmgr_nbuiltins = (sizeof(fmgr_builtins) / sizeof(FmgrBuiltin));
# Note that the array has to be filled up to FirstGenbkiObjectId,
# as we can't rely on zero initialization as 0 is a valid mapping.
print $tfh qq|
-const uint16 fmgr_builtin_oid_index[FirstGenbkiObjectId] = {
-|;
+ const uint16 fmgr_builtin_oid_index[FirstGenbkiObjectId] = {
+ |;
for (my $i = 0; $i < $FirstGenbkiObjectId; $i++)
{
@@ -297,13 +297,13 @@ Catalog::RenameTempFile($tabfile, $tmpext);
sub usage
{
die <<EOM;
-Usage: perl -I [directory of Catalog.pm] Gen_fmgrtab.pl -I [include path] [path to pg_proc.dat]
+ Usage: perl -I [directory of Catalog.pm] Gen_fmgrtab.pl -I [include path] [path to pg_proc.dat]
-Gen_fmgrtab.pl generates fmgroids.h, fmgrprotos.h, and fmgrtab.c from
-pg_proc.dat
+ Gen_fmgrtab.pl generates fmgroids.h, fmgrprotos.h, and fmgrtab.c from
+ pg_proc.dat
-Report bugs to <pgsql-bugs\@postgresql.org>.
-EOM
+ Report bugs to <pgsql-bugs\@postgresql.org>.
+ EOM
}
exit 0;
diff --git a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
index 672d890..c0ca278 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_BIG5.pl
@@ -48,14 +48,14 @@ foreach my $i (@$cp950txt)
&& $code <= 0xf9dc)
{
push @$all,
- {
+ {
code => $code,
ucs => $ucs,
comment => $i->{comment},
direction => BOTH,
f => $i->{f},
l => $i->{l}
- };
+ };
}
}
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 00c1f33..5f08b59 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -40,8 +40,8 @@ while (<$in>)
next if (($code & 0xFF) < 0xA1);
next
if (
- !( $code >= 0xA100 && $code <= 0xA9FF
- || $code >= 0xB000 && $code <= 0xF7FF));
+ !( $code >= 0xA100 && $code <= 0xA9FF
+ || $code >= 0xB000 && $code <= 0xF7FF));
next if ($code >= 0xA2A1 && $code <= 0xA2B0);
next if ($code >= 0xA2E3 && $code <= 0xA2E4);
@@ -70,13 +70,13 @@ while (<$in>)
}
push @mapping,
- {
+ {
ucs => $ucs,
code => $code,
direction => BOTH,
f => $in_file,
l => $.
- };
+ };
}
close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
index 9ad7dd0..e03ff76 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_JIS_2004.pl
@@ -33,7 +33,7 @@ while (my $line = <$in>)
my $ucs2 = hex($u2);
push @all,
- {
+ {
direction => BOTH,
ucs => $ucs1,
ucs_second => $ucs2,
@@ -41,7 +41,7 @@ while (my $line = <$in>)
comment => $rest,
f => $in_file,
l => $.
- };
+ };
}
elsif ($line =~ /^0x(.*)[ \t]*U\+(.*)[ \t]*#(.*)$/)
{
@@ -54,14 +54,14 @@ while (my $line = <$in>)
next if ($code < 0x80 && $ucs < 0x80);
push @all,
- {
+ {
direction => BOTH,
ucs => $ucs,
code => $code,
comment => $rest,
f => $in_file,
l => $.
- };
+ };
}
}
close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
index 4e4e3fd..2d608fb 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_KR.pl
@@ -32,13 +32,13 @@ foreach my $i (@$mapping)
# Some extra characters that are not in KSX1001.TXT
push @$mapping,
- ( {
- direction => BOTH,
- ucs => 0x20AC,
- code => 0xa2e6,
- comment => '# EURO SIGN',
- f => $this_script,
- l => __LINE__
+( {
+ direction => BOTH,
+ ucs => 0x20AC,
+ code => 0xa2e6,
+ comment => '# EURO SIGN',
+ f => $this_script,
+ l => __LINE__
},
{
direction => BOTH,
diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
index 98d4156d..e2d1510 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_TW.pl
@@ -53,14 +53,14 @@ foreach my $i (@$mapping)
if ($origcode >= 0x12121 && $origcode <= 0x20000)
{
push @extras,
- {
+ {
ucs => $i->{ucs},
code => ($i->{code} + 0x8ea10000),
rest => $i->{rest},
direction => TO_UNICODE,
f => $i->{f},
l => $i->{l}
- };
+ };
}
}
diff --git a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
index 65ffee3..b161c2f 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
@@ -36,13 +36,13 @@ while (<$in>)
if ($code >= 0x80 && $ucs >= 0x0080)
{
push @mapping,
- {
+ {
ucs => $ucs,
code => $code,
direction => BOTH,
f => $in_file,
l => $.
- };
+ };
}
}
close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
index 79901dc..aaae850 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_JOHAB.pl
@@ -26,13 +26,13 @@ my $mapping = &read_source("JOHAB.TXT");
# Some extra characters that are not in JOHAB.TXT
push @$mapping,
- ( {
- direction => BOTH,
- ucs => 0x20AC,
- code => 0xd9e6,
- comment => '# EURO SIGN',
- f => $this_script,
- l => __LINE__
+( {
+ direction => BOTH,
+ ucs => 0x20AC,
+ code => 0xd9e6,
+ comment => '# EURO SIGN',
+ f => $this_script,
+ l => __LINE__
},
{
direction => BOTH,
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
index bb84458..e216501 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SHIFT_JIS_2004.pl
@@ -33,7 +33,7 @@ while (my $line = <$in>)
my $ucs2 = hex($u2);
push @mapping,
- {
+ {
code => $code,
ucs => $ucs1,
ucs_second => $ucs2,
@@ -41,7 +41,7 @@ while (my $line = <$in>)
direction => BOTH,
f => $in_file,
l => $.
- };
+ };
}
elsif ($line =~ /^0x(.*)[ \t]*U\+(.*)[ \t]*#(.*)$/)
{
@@ -70,14 +70,14 @@ while (my $line = <$in>)
}
push @mapping,
- {
+ {
code => $code,
ucs => $ucs,
comment => $rest,
direction => $direction,
f => $in_file,
l => $.
- };
+ };
}
}
close($in);
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
index 738c195..7be4e1a 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
@@ -36,13 +36,13 @@ foreach my $i (@$mapping)
# Add these UTF8->SJIS pairs to the table.
push @$mapping,
- ( {
- direction => FROM_UNICODE,
- ucs => 0x00a2,
- code => 0x8191,
- comment => '# CENT SIGN',
- f => $this_script,
- l => __LINE__
+( {
+ direction => FROM_UNICODE,
+ ucs => 0x00a2,
+ code => 0x8191,
+ comment => '# CENT SIGN',
+ f => $this_script,
+ l => __LINE__
},
{
direction => FROM_UNICODE,
diff --git a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
index 4231aaf..524a1d5 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_UHC.pl
@@ -39,26 +39,26 @@ while (<$in>)
if ($code >= 0x80 && $ucs >= 0x0080)
{
push @mapping,
- {
+ {
ucs => $ucs,
code => $code,
direction => BOTH,
f => $in_file,
l => $.
- };
+ };
}
}
close($in);
# One extra character that's not in the source file.
push @mapping,
- {
+{
direction => BOTH,
code => 0xa2e8,
ucs => 0x327e,
comment => 'CIRCLED HANGUL IEUNG U',
f => $this_script,
l => __LINE__
- };
+};
print_conversion_tables($this_script, "UHC", \@mapping);
diff --git a/src/backend/utils/mb/Unicode/convutils.pm b/src/backend/utils/mb/Unicode/convutils.pm
index b3e2dd0..8fe160f 100644
--- a/src/backend/utils/mb/Unicode/convutils.pm
+++ b/src/backend/utils/mb/Unicode/convutils.pm
@@ -183,7 +183,7 @@ sub print_from_utf8_combined_map
if ($verbose && $last_comment ne "");
printf $out "\n {0x%08x, 0x%08x, 0x%04x}",
- $i->{utf8}, $i->{utf8_second}, $i->{code};
+ $i->{utf8}, $i->{utf8_second}, $i->{code};
if ($verbose >= 2)
{
$last_comment =
@@ -219,7 +219,7 @@ sub print_to_utf8_combined_map
if ($verbose && $last_comment ne "");
printf $out "\n {0x%04x, 0x%08x, 0x%08x}",
- $i->{code}, $i->{utf8}, $i->{utf8_second};
+ $i->{code}, $i->{utf8}, $i->{utf8_second};
if ($verbose >= 2)
{
@@ -321,13 +321,13 @@ sub print_radix_table
# Add the segments for the radix trees themselves.
push @segments,
- build_segments_from_tree("Single byte table", "1-byte", 1, \%b1map);
+ build_segments_from_tree("Single byte table", "1-byte", 1, \%b1map);
push @segments,
- build_segments_from_tree("Two byte table", "2-byte", 2, \%b2map);
+ build_segments_from_tree("Two byte table", "2-byte", 2, \%b2map);
push @segments,
- build_segments_from_tree("Three byte table", "3-byte", 3, \%b3map);
+ build_segments_from_tree("Three byte table", "3-byte", 3, \%b3map);
push @segments,
- build_segments_from_tree("Four byte table", "4-byte", 4, \%b4map);
+ build_segments_from_tree("Four byte table", "4-byte", 4, \%b4map);
###
### Find min and max index used in each level of each tree.
@@ -376,11 +376,11 @@ sub print_radix_table
}
unshift @segments,
- {
+ {
header => "Dummy map, for invalid values",
min_idx => 0,
max_idx => $widest_range
- };
+ };
###
### Eliminate overlapping zeros
@@ -420,9 +420,9 @@ sub print_radix_table
# How many zeros in common?
my $overlaid_trail_zeros =
- ($this_trail_zeros > $next_lead_zeros)
- ? $next_lead_zeros
- : $this_trail_zeros;
+ ($this_trail_zeros > $next_lead_zeros)
+ ? $next_lead_zeros
+ : $this_trail_zeros;
$seg->{overlaid_trail_zeros} = $overlaid_trail_zeros;
$seg->{max_idx} = $seg->{max_idx} - $overlaid_trail_zeros;
@@ -555,19 +555,19 @@ sub print_radix_table
}
printf $out "\n";
printf $out " 0x%04x, /* offset of table for 1-byte inputs */\n",
- $b1root;
+ $b1root;
printf $out " 0x%02x, /* b1_lower */\n", $b1_lower;
printf $out " 0x%02x, /* b1_upper */\n", $b1_upper;
printf $out "\n";
printf $out " 0x%04x, /* offset of table for 2-byte inputs */\n",
- $b2root;
+ $b2root;
printf $out " 0x%02x, /* b2_1_lower */\n", $b2_1_lower;
printf $out " 0x%02x, /* b2_1_upper */\n", $b2_1_upper;
printf $out " 0x%02x, /* b2_2_lower */\n", $b2_2_lower;
printf $out " 0x%02x, /* b2_2_upper */\n", $b2_2_upper;
printf $out "\n";
printf $out " 0x%04x, /* offset of table for 3-byte inputs */\n",
- $b3root;
+ $b3root;
printf $out " 0x%02x, /* b3_1_lower */\n", $b3_1_lower;
printf $out " 0x%02x, /* b3_1_upper */\n", $b3_1_upper;
printf $out " 0x%02x, /* b3_2_lower */\n", $b3_2_lower;
@@ -576,7 +576,7 @@ sub print_radix_table
printf $out " 0x%02x, /* b3_3_upper */\n", $b3_3_upper;
printf $out "\n";
printf $out " 0x%04x, /* offset of table for 3-byte inputs */\n",
- $b4root;
+ $b4root;
printf $out " 0x%02x, /* b4_1_lower */\n", $b4_1_lower;
printf $out " 0x%02x, /* b4_1_upper */\n", $b4_1_upper;
printf $out " 0x%02x, /* b4_2_lower */\n", $b4_2_lower;
@@ -648,7 +648,7 @@ sub build_segments_from_tree
# but makes the maps nicer to read.
@segments =
sort { $a->{level} cmp $b->{level} or $a->{path} cmp $b->{path} }
- @segments;
+ @segments;
}
return @segments;
@@ -664,14 +664,14 @@ sub build_segments_recurse
if ($level == $depth)
{
push @segments,
- {
+ {
header => $header . ", leaf: ${path}xx",
label => $label,
level => $level,
depth => $depth,
path => $path,
values => $map
- };
+ };
}
else
{
@@ -683,20 +683,20 @@ sub build_segments_recurse
my $childlabel = "$depth-level-$level-$childpath";
push @segments,
- build_segments_recurse($header, $childlabel, $childpath,
+ build_segments_recurse($header, $childlabel, $childpath,
$level + 1, $depth, $val);
$children{$i} = $childlabel;
}
push @segments,
- {
+ {
header => $header . ", byte #$level: ${path}xx",
label => $label,
level => $level,
depth => $depth,
path => $path,
values => \%children
- };
+ };
}
return @segments;
}
@@ -750,7 +750,7 @@ sub make_charmap
if ($verbose)
{
printf $out "0x%04x 0x%04x %s:%d %s\n", $src, $dst, $c->{f},
- $c->{l}, $c->{comment};
+ $c->{l}, $c->{comment};
}
}
if ($verbose)
@@ -817,15 +817,15 @@ sub ucs2utf
elsif ($ucs > 0x07ff && $ucs <= 0xffff)
{
$utf =
- ((($ucs >> 12) | 0xe0) << 16) |
- (((($ucs & 0x0fc0) >> 6) | 0x80) << 8) | (($ucs & 0x003f) | 0x80);
+ ((($ucs >> 12) | 0xe0) << 16) |
+ (((($ucs & 0x0fc0) >> 6) | 0x80) << 8) | (($ucs & 0x003f) | 0x80);
}
else
{
$utf =
- ((($ucs >> 18) | 0xf0) << 24) |
- (((($ucs & 0x3ffff) >> 12) | 0x80) << 16) |
- (((($ucs & 0x0fc0) >> 6) | 0x80) << 8) | (($ucs & 0x003f) | 0x80);
+ ((($ucs >> 18) | 0xf0) << 24) |
+ (((($ucs & 0x3ffff) >> 12) | 0x80) << 16) |
+ (((($ucs & 0x0fc0) >> 6) | 0x80) << 8) | (($ucs & 0x003f) | 0x80);
}
return $utf;
}
diff --git a/src/backend/utils/sort/gen_qsort_tuple.pl b/src/backend/utils/sort/gen_qsort_tuple.pl
index b6b2ffa..7003edf 100644
--- a/src/backend/utils/sort/gen_qsort_tuple.pl
+++ b/src/backend/utils/sort/gen_qsort_tuple.pl
@@ -121,124 +121,124 @@ swapfunc(SortTuple *a, SortTuple *b, size_t n)
}
#define swap(a, b) \
- do { \
- SortTuple t = *(a); \
- *(a) = *(b); \
- *(b) = t; \
- } while (0);
+do { \
+ SortTuple t = *(a); \
+ *(a) = *(b); \
+ *(b) = t; \
+} while (0);
#define vecswap(a, b, n) if ((n) > 0) swapfunc(a, b, n)
EOM
- return;
+ return;
}
sub emit_qsort_implementation
{
print <<EOM;
-static SortTuple *
-med3_$SUFFIX(SortTuple *a, SortTuple *b, SortTuple *c$EXTRAARGS)
-{
- return cmp_$SUFFIX(a, b$CMPPARAMS) < 0 ?
+ static SortTuple *
+ med3_$SUFFIX(SortTuple *a, SortTuple *b, SortTuple *c$EXTRAARGS)
+ {
+ return cmp_$SUFFIX(a, b$CMPPARAMS) < 0 ?
(cmp_$SUFFIX(b, c$CMPPARAMS) < 0 ? b :
(cmp_$SUFFIX(a, c$CMPPARAMS) < 0 ? c : a))
: (cmp_$SUFFIX(b, c$CMPPARAMS) > 0 ? b :
(cmp_$SUFFIX(a, c$CMPPARAMS) < 0 ? a : c));
-}
-
-static void
-qsort_$SUFFIX(SortTuple *a, size_t n$EXTRAARGS)
-{
- SortTuple *pa,
- *pb,
- *pc,
- *pd,
- *pl,
- *pm,
- *pn;
- size_t d1,
- d2;
- int r,
- presorted;
-
-loop:
- CHECK_FOR_INTERRUPTS();
- if (n < 7)
- {
- for (pm = a + 1; pm < a + n; pm++)
- for (pl = pm; pl > a && cmp_$SUFFIX(pl - 1, pl$CMPPARAMS) > 0; pl--)
- swap(pl, pl - 1);
- return;
}
- presorted = 1;
- for (pm = a + 1; pm < a + n; pm++)
+
+ static void
+ qsort_$SUFFIX(SortTuple *a, size_t n$EXTRAARGS)
{
+ SortTuple *pa,
+ *pb,
+ *pc,
+ *pd,
+ *pl,
+ *pm,
+ *pn;
+ size_t d1,
+ d2;
+ int r,
+ presorted;
+
+ loop:
CHECK_FOR_INTERRUPTS();
- if (cmp_$SUFFIX(pm - 1, pm$CMPPARAMS) > 0)
+ if (n < 7)
{
- presorted = 0;
- break;
+ for (pm = a + 1; pm < a + n; pm++)
+ for (pl = pm; pl > a && cmp_$SUFFIX(pl - 1, pl$CMPPARAMS) > 0; pl--)
+ swap(pl, pl - 1);
+ return;
}
- }
- if (presorted)
- return;
- pm = a + (n / 2);
- if (n > 7)
- {
- pl = a;
- pn = a + (n - 1);
- if (n > 40)
+ presorted = 1;
+ for (pm = a + 1; pm < a + n; pm++)
{
- size_t d = (n / 8);
-
- pl = med3_$SUFFIX(pl, pl + d, pl + 2 * d$EXTRAPARAMS);
- pm = med3_$SUFFIX(pm - d, pm, pm + d$EXTRAPARAMS);
- pn = med3_$SUFFIX(pn - 2 * d, pn - d, pn$EXTRAPARAMS);
+ CHECK_FOR_INTERRUPTS();
+ if (cmp_$SUFFIX(pm - 1, pm$CMPPARAMS) > 0)
+ {
+ presorted = 0;
+ break;
+ }
}
- pm = med3_$SUFFIX(pl, pm, pn$EXTRAPARAMS);
- }
- swap(a, pm);
- pa = pb = a + 1;
- pc = pd = a + (n - 1);
- for (;;)
- {
- while (pb <= pc && (r = cmp_$SUFFIX(pb, a$CMPPARAMS)) <= 0)
+ if (presorted)
+ return;
+ pm = a + (n / 2);
+ if (n > 7)
{
- if (r == 0)
+ pl = a;
+ pn = a + (n - 1);
+ if (n > 40)
{
- swap(pa, pb);
- pa++;
+ size_t d = (n / 8);
+
+ pl = med3_$SUFFIX(pl, pl + d, pl + 2 * d$EXTRAPARAMS);
+ pm = med3_$SUFFIX(pm - d, pm, pm + d$EXTRAPARAMS);
+ pn = med3_$SUFFIX(pn - 2 * d, pn - d, pn$EXTRAPARAMS);
}
- pb++;
- CHECK_FOR_INTERRUPTS();
+ pm = med3_$SUFFIX(pl, pm, pn$EXTRAPARAMS);
}
- while (pb <= pc && (r = cmp_$SUFFIX(pc, a$CMPPARAMS)) >= 0)
+ swap(a, pm);
+ pa = pb = a + 1;
+ pc = pd = a + (n - 1);
+ for (;;)
{
- if (r == 0)
+ while (pb <= pc && (r = cmp_$SUFFIX(pb, a$CMPPARAMS)) <= 0)
{
- swap(pc, pd);
- pd--;
+ if (r == 0)
+ {
+ swap(pa, pb);
+ pa++;
+ }
+ pb++;
+ CHECK_FOR_INTERRUPTS();
}
+ while (pb <= pc && (r = cmp_$SUFFIX(pc, a$CMPPARAMS)) >= 0)
+ {
+ if (r == 0)
+ {
+ swap(pc, pd);
+ pd--;
+ }
+ pc--;
+ CHECK_FOR_INTERRUPTS();
+ }
+ if (pb > pc)
+ break;
+ swap(pb, pc);
+ pb++;
pc--;
- CHECK_FOR_INTERRUPTS();
}
- if (pb > pc)
- break;
- swap(pb, pc);
- pb++;
- pc--;
- }
- pn = a + n;
- d1 = Min(pa - a, pb - pa);
- vecswap(a, pb - d1, d1);
- d1 = Min(pd - pc, pn - pd - 1);
- vecswap(pb, pn - d1, d1);
- d1 = pb - pa;
- d2 = pd - pc;
- if (d1 <= d2)
- {
- /* Recurse on left partition, then iterate on right partition */
+ pn = a + n;
+ d1 = Min(pa - a, pb - pa);
+ vecswap(a, pb - d1, d1);
+ d1 = Min(pd - pc, pn - pd - 1);
+ vecswap(pb, pn - d1, d1);
+ d1 = pb - pa;
+ d2 = pd - pc;
+ if (d1 <= d2)
+ {
+ /* Recurse on left partition, then iterate on right partition */
if (d1 > 1)
qsort_$SUFFIX(a, d1$EXTRAPARAMS);
if (d2 > 1)
diff --git a/src/bin/initdb/t/001_initdb.pl b/src/bin/initdb/t/001_initdb.pl
index 6d90db5..b966d79 100644
--- a/src/bin/initdb/t/001_initdb.pl
+++ b/src/bin/initdb/t/001_initdb.pl
@@ -74,7 +74,7 @@ command_ok([ 'initdb', '-S', $datadir ], 'sync only');
command_fails([ 'initdb', $datadir ], 'existing data directory');
# Check group access on PGDATA
-SKIP:
+ SKIP:
{
skip "unix-style permissions not supported on Windows", 2
if ($windows_os);
diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
index 7f6fd50..9b5299f 100644
--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl
+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
@@ -66,7 +66,7 @@ $node->restart;
# Write some files to test that they are not copied.
foreach my $filename (
qw(backup_label tablespace_map postgresql.auto.conf.tmp current_logfiles.tmp)
- )
+)
{
open my $file, '>>', "$pgdata/$filename";
print $file "DONOTCOPY";
@@ -106,7 +106,7 @@ $node->command_ok([ 'pg_basebackup', '-D', "$tempdir/backup", '-X', 'none' ],
ok(-f "$tempdir/backup/PG_VERSION", 'backup was created');
# Permissions on backup should be default
-SKIP:
+ SKIP:
{
skip "unix-style permissions not supported on Windows", 1
if ($windows_os);
@@ -124,7 +124,7 @@ is_deeply(
# Contents of these directories should not be copied.
foreach my $dirname (
qw(pg_dynshmem pg_notify pg_replslot pg_serial pg_snapshots pg_stat_tmp pg_subtrans)
- )
+)
{
is_deeply(
[ sort(slurp_dir("$tempdir/backup/$dirname/")) ],
@@ -210,7 +210,7 @@ unlink "$pgdata/$superlongname";
# The following tests test symlinks. Windows doesn't have symlinks, so
# skip on Windows.
-SKIP:
+ SKIP:
{
skip "symlinks not supported on Windows", 18 if ($windows_os);
@@ -291,10 +291,10 @@ SKIP:
ok(-d "$tempdir/tbackup/tblspc1", 'tablespace was relocated');
opendir(my $dh, "$pgdata/pg_tblspc") or die;
ok( ( grep {
- -l "$tempdir/backup1/pg_tblspc/$_"
- and readlink "$tempdir/backup1/pg_tblspc/$_" eq
- "$tempdir/tbackup/tblspc1"
- } readdir($dh)),
+ -l "$tempdir/backup1/pg_tblspc/$_"
+ and readlink "$tempdir/backup1/pg_tblspc/$_" eq
+ "$tempdir/tbackup/tblspc1"
+ } readdir($dh)),
"tablespace symlink was updated");
closedir $dh;
diff --git a/src/bin/pg_basebackup/t/020_pg_receivewal.pl b/src/bin/pg_basebackup/t/020_pg_receivewal.pl
index 6e2f051..6277cf3 100644
--- a/src/bin/pg_basebackup/t/020_pg_receivewal.pl
+++ b/src/bin/pg_basebackup/t/020_pg_receivewal.pl
@@ -64,7 +64,7 @@ $primary->command_ok(
'streaming some WAL with --synchronous');
# Permissions on WAL files should be default
-SKIP:
+ SKIP:
{
skip "unix-style permissions not supported on Windows", 1
if ($windows_os);
diff --git a/src/bin/pg_ctl/t/001_start_stop.pl b/src/bin/pg_ctl/t/001_start_stop.pl
index 50a57d0..c6ee5e9 100644
--- a/src/bin/pg_ctl/t/001_start_stop.pl
+++ b/src/bin/pg_ctl/t/001_start_stop.pl
@@ -68,7 +68,7 @@ command_ok([ 'pg_ctl', 'restart', '-D', "$tempdir/data", '-l', $logFileName ],
'pg_ctl restart with server not running');
# Permissions on log file should be default
-SKIP:
+ SKIP:
{
skip "unix-style permissions not supported on Windows", 2
if ($windows_os);
@@ -80,7 +80,7 @@ SKIP:
# Log file for group access test
$logFileName = "$tempdir/data/perm-test-640.log";
-SKIP:
+ SKIP:
{
skip "group access not supported on Windows", 3 if ($windows_os);
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index ea31639..23ec7e6 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -47,10 +47,10 @@ my %pgdump_runs = (
'--binary-upgrade',
'-d', 'postgres', # alternative way to specify database
],
- restore_cmd => [
- 'pg_restore', '-Fc', '--verbose',
- "--file=$tempdir/binary_upgrade.sql",
- "$tempdir/binary_upgrade.dump",
+ restore_cmd => [
+ 'pg_restore', '-Fc', '--verbose',
+ "--file=$tempdir/binary_upgrade.sql",
+ "$tempdir/binary_upgrade.dump",
],
},
clean => {
@@ -133,10 +133,10 @@ my %pgdump_runs = (
'pg_dump', '-Fc', '-Z6',
"--file=$tempdir/defaults_custom_format.dump", 'postgres',
],
- restore_cmd => [
- 'pg_restore', '-Fc',
- "--file=$tempdir/defaults_custom_format.sql",
- "$tempdir/defaults_custom_format.dump",
+ restore_cmd => [
+ 'pg_restore', '-Fc',
+ "--file=$tempdir/defaults_custom_format.sql",
+ "$tempdir/defaults_custom_format.dump",
],
},
@@ -147,10 +147,10 @@ my %pgdump_runs = (
'pg_dump', '-Fd',
"--file=$tempdir/defaults_dir_format", 'postgres',
],
- restore_cmd => [
- 'pg_restore', '-Fd',
- "--file=$tempdir/defaults_dir_format.sql",
- "$tempdir/defaults_dir_format",
+ restore_cmd => [
+ 'pg_restore', '-Fd',
+ "--file=$tempdir/defaults_dir_format.sql",
+ "$tempdir/defaults_dir_format",
],
},
@@ -161,10 +161,10 @@ my %pgdump_runs = (
'pg_dump', '-Fd', '-j2', "--file=$tempdir/defaults_parallel",
'postgres',
],
- restore_cmd => [
- 'pg_restore',
- "--file=$tempdir/defaults_parallel.sql",
- "$tempdir/defaults_parallel",
+ restore_cmd => [
+ 'pg_restore',
+ "--file=$tempdir/defaults_parallel.sql",
+ "$tempdir/defaults_parallel",
],
},
@@ -175,11 +175,11 @@ my %pgdump_runs = (
'pg_dump', '-Ft',
"--file=$tempdir/defaults_tar_format.tar", 'postgres',
],
- restore_cmd => [
- 'pg_restore',
- '--format=tar',
- "--file=$tempdir/defaults_tar_format.sql",
- "$tempdir/defaults_tar_format.tar",
+ restore_cmd => [
+ 'pg_restore',
+ '--format=tar',
+ "--file=$tempdir/defaults_tar_format.sql",
+ "$tempdir/defaults_tar_format.tar",
],
},
exclude_dump_test_schema => {
@@ -284,9 +284,9 @@ my %pgdump_runs = (
'--schema=dump_test_second_schema',
'postgres',
],
- restore_cmd => [
- 'pg_restore', "--file=$tempdir/role_parallel.sql",
- "$tempdir/role_parallel",
+ restore_cmd => [
+ 'pg_restore', "--file=$tempdir/role_parallel.sql",
+ "$tempdir/role_parallel",
],
},
schema_only => {
@@ -389,17 +389,17 @@ my %tests = (
create_sql => 'ALTER DEFAULT PRIVILEGES
FOR ROLE regress_dump_test_role IN SCHEMA dump_test
GRANT SELECT ON TABLES TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QALTER DEFAULT PRIVILEGES \E
\QFOR ROLE regress_dump_test_role IN SCHEMA dump_test \E
\QGRANT SELECT ON TABLES TO regress_dump_test_role;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role REVOKE' => {
@@ -407,22 +407,22 @@ my %tests = (
create_sql => 'ALTER DEFAULT PRIVILEGES
FOR ROLE regress_dump_test_role
REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;',
- regexp => qr/^
+ regexp => qr/^
\QALTER DEFAULT PRIVILEGES \E
\QFOR ROLE regress_dump_test_role \E
\QREVOKE ALL ON FUNCTIONS FROM PUBLIC;\E
/xm,
- like => { %full_runs, section_post_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { no_privs => 1, },
},
'ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role REVOKE SELECT'
- => {
+ => {
create_order => 56,
create_sql => 'ALTER DEFAULT PRIVILEGES
FOR ROLE regress_dump_test_role
REVOKE SELECT ON TABLES FROM regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QALTER DEFAULT PRIVILEGES \E
\QFOR ROLE regress_dump_test_role \E
\QREVOKE ALL ON TABLES FROM regress_dump_test_role;\E\n
@@ -430,9 +430,9 @@ my %tests = (
\QFOR ROLE regress_dump_test_role \E
\QGRANT INSERT,REFERENCES,DELETE,TRIGGER,TRUNCATE,UPDATE ON TABLES TO regress_dump_test_role;\E
/xm,
- like => { %full_runs, section_post_data => 1, },
- unlike => { no_privs => 1, },
- },
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { no_privs => 1, },
+ },
'ALTER ROLE regress_dump_test_role' => {
regexp => qr/^
@@ -440,11 +440,11 @@ my %tests = (
\QNOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB NOLOGIN \E
\QNOREPLICATION NOBYPASSRLS;\E
/xm,
- like => {
- pg_dumpall_dbprivs => 1,
- pg_dumpall_globals => 1,
- pg_dumpall_globals_clean => 1,
- },
+ like => {
+ pg_dumpall_dbprivs => 1,
+ pg_dumpall_globals => 1,
+ pg_dumpall_globals_clean => 1,
+ },
},
'ALTER COLLATION test0 OWNER TO' => {
@@ -471,12 +471,12 @@ my %tests = (
\QALTER FUNCTION dump_test.pltestlang_call_handler() \E
\QOWNER TO \E
.*;/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
+ },
},
'ALTER OPERATOR FAMILY dump_test.op_family OWNER TO' => {
@@ -484,12 +484,12 @@ my %tests = (
\QALTER OPERATOR FAMILY dump_test.op_family USING btree \E
\QOWNER TO \E
.*;/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
+ },
},
'ALTER OPERATOR FAMILY dump_test.op_family USING btree' => {
@@ -503,7 +503,7 @@ my %tests = (
OPERATOR 5 >(bigint,int4),
FUNCTION 1 (int4, int4) btint4cmp(int4,int4),
FUNCTION 2 (int4, int4) btint4sortsupport(internal);',
- regexp => qr/^
+ regexp => qr/^
\QALTER OPERATOR FAMILY dump_test.op_family USING btree ADD\E\n\s+
\QOPERATOR 1 <(bigint,integer) ,\E\n\s+
\QOPERATOR 2 <=(bigint,integer) ,\E\n\s+
@@ -513,9 +513,9 @@ my %tests = (
\QFUNCTION 1 (integer, integer) btint4cmp(integer,integer) ,\E\n\s+
\QFUNCTION 2 (integer, integer) btint4sortsupport(internal);\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'ALTER OPERATOR CLASS dump_test.op_class OWNER TO' => {
@@ -523,12 +523,12 @@ my %tests = (
\QALTER OPERATOR CLASS dump_test.op_class USING btree \E
\QOWNER TO \E
.*;/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
+ },
},
'ALTER PUBLICATION pub1 OWNER TO' => {
@@ -562,10 +562,10 @@ my %tests = (
'ALTER SCHEMA dump_test OWNER TO' => {
regexp => qr/^ALTER SCHEMA dump_test OWNER TO .*;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
},
},
@@ -583,12 +583,12 @@ my %tests = (
regexp => qr/^
\QALTER SEQUENCE dump_test.test_table_col1_seq OWNED BY dump_test.test_table.col1;\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -600,12 +600,12 @@ my %tests = (
\QALTER TABLE ONLY dump_test.test_table\E \n^\s+
\QADD CONSTRAINT test_table_pkey PRIMARY KEY (col1);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -620,12 +620,12 @@ my %tests = (
CREATE TABLE dump_test.test_table_fk_1
PARTITION OF dump_test.test_table_fk
FOR VALUES FROM (0) TO (10);',
- regexp => qr/
+ regexp => qr/
\QADD CONSTRAINT test_table_fk_col1_fkey FOREIGN KEY (col1) REFERENCES dump_test.test_table\E
/xm,
- like => {
- %full_runs, %dump_test_schema_runs, section_post_data => 1,
- },
+ like => {
+ %full_runs, %dump_test_schema_runs, section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
},
@@ -635,15 +635,15 @@ my %tests = (
create_order => 93,
create_sql =>
'ALTER TABLE dump_test.test_table ALTER COLUMN col1 SET STATISTICS 90;',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col1 SET STATISTICS 90;\E\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -654,15 +654,15 @@ my %tests = (
create_order => 94,
create_sql =>
'ALTER TABLE dump_test.test_table ALTER COLUMN col2 SET STORAGE EXTERNAL;',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col2 SET STORAGE EXTERNAL;\E\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -673,15 +673,15 @@ my %tests = (
create_order => 95,
create_sql =>
'ALTER TABLE dump_test.test_table ALTER COLUMN col3 SET STORAGE MAIN;',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col3 SET STORAGE MAIN;\E\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -692,15 +692,15 @@ my %tests = (
create_order => 95,
create_sql =>
'ALTER TABLE dump_test.test_table ALTER COLUMN col4 SET (n_distinct = 10);',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE ONLY dump_test.test_table ALTER COLUMN col4 SET (n_distinct=10);\E\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -708,27 +708,27 @@ my %tests = (
},
'ALTER TABLE ONLY dump_test.measurement ATTACH PARTITION measurement_y2006m2'
- => {
+ => {
regexp => qr/^
\QALTER TABLE ONLY dump_test.measurement ATTACH PARTITION dump_test_second_schema.measurement_y2006m2 \E
\QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n
/xm,
- like => { binary_upgrade => 1, },
- },
+ like => { binary_upgrade => 1, },
+ },
'ALTER TABLE test_table CLUSTER ON test_table_pkey' => {
create_order => 96,
create_sql =>
'ALTER TABLE dump_test.test_table CLUSTER ON test_table_pkey',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE dump_test.test_table CLUSTER ON test_table_pkey;\E\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -742,7 +742,7 @@ my %tests = (
\QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E
\n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n\n\n
\QALTER TABLE dump_test.test_table ENABLE TRIGGER ALL;\E/xm,
- like => { data_only => 1, },
+ like => { data_only => 1, },
},
'ALTER FOREIGN TABLE foreign_table ALTER COLUMN c1 OPTIONS' => {
@@ -751,9 +751,9 @@ my %tests = (
\s+\Qcolumn_name 'col1'\E\n
\Q);\E\n
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'ALTER TABLE test_table OWNER TO' => {
@@ -775,13 +775,13 @@ my %tests = (
create_order => 23,
create_sql => 'ALTER TABLE dump_test.test_table
ENABLE ROW LEVEL SECURITY;',
- regexp =>
+ regexp =>
qr/^ALTER TABLE dump_test.test_table ENABLE ROW LEVEL SECURITY;/m,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
},
unlike => {
exclude_dump_test_schema => 1,
@@ -792,30 +792,30 @@ my %tests = (
'ALTER TABLE test_second_table OWNER TO' => {
regexp => qr/^ALTER TABLE dump_test.test_second_table OWNER TO .*;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
},
},
'ALTER TABLE measurement OWNER TO' => {
regexp => qr/^ALTER TABLE dump_test.measurement OWNER TO .*;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
},
},
'ALTER TABLE measurement_y2006m2 OWNER TO' => {
regexp =>
qr/^ALTER TABLE dump_test_second_schema.measurement_y2006m2 OWNER TO .*;/m,
- like => {
- %full_runs,
- role => 1,
- section_pre_data => 1,
+ like => {
+ %full_runs,
+ role => 1,
+ section_pre_data => 1,
},
unlike => { no_owner => 1, },
},
@@ -823,35 +823,35 @@ my %tests = (
'ALTER FOREIGN TABLE foreign_table OWNER TO' => {
regexp =>
qr/^ALTER FOREIGN TABLE dump_test.foreign_table OWNER TO .*;/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
},
},
'ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 OWNER TO' => {
regexp =>
qr/^ALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 OWNER TO .*;/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_owner => 1,
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_owner => 1,
},
},
'ALTER TEXT SEARCH DICTIONARY alt_ts_dict1 OWNER TO' => {
regexp =>
qr/^ALTER TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 OWNER TO .*;/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- only_dump_test_table => 1,
- no_owner => 1,
- role => 1,
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ only_dump_test_table => 1,
+ no_owner => 1,
+ role => 1,
},
},
@@ -859,13 +859,13 @@ my %tests = (
create_order => 50,
create_sql =>
'SELECT pg_catalog.lo_from_bytea(0, \'\\x310a320a330a340a350a360a370a380a390a\');',
- regexp => qr/^SELECT pg_catalog\.lo_create\('\d+'\);/m,
- like => {
- %full_runs,
- column_inserts => 1,
- data_only => 1,
- section_pre_data => 1,
- test_schema_plus_blobs => 1,
+ regexp => qr/^SELECT pg_catalog\.lo_create\('\d+'\);/m,
+ like => {
+ %full_runs,
+ column_inserts => 1,
+ data_only => 1,
+ section_pre_data => 1,
+ test_schema_plus_blobs => 1,
},
unlike => {
schema_only => 1,
@@ -880,13 +880,13 @@ my %tests = (
\Q'\x310a320a330a340a350a360a370a380a390a');\E\n
\QSELECT pg_catalog.lo_close(0);\E
/xm,
- like => {
- %full_runs,
- column_inserts => 1,
- data_only => 1,
- section_data => 1,
- test_schema_plus_blobs => 1,
- },
+ like => {
+ %full_runs,
+ column_inserts => 1,
+ data_only => 1,
+ section_data => 1,
+ test_schema_plus_blobs => 1,
+ },
unlike => {
binary_upgrade => 1,
no_blobs => 1,
@@ -912,13 +912,13 @@ my %tests = (
create_order => 36,
create_sql => 'COMMENT ON TABLE dump_test.test_table
IS \'comment on table\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TABLE dump_test.test_table IS 'comment on table';/m,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
},
unlike => {
exclude_dump_test_schema => 1,
@@ -930,15 +930,15 @@ my %tests = (
create_order => 36,
create_sql => 'COMMENT ON COLUMN dump_test.test_table.col1
IS \'comment on column\';',
- regexp => qr/^
+ regexp => qr/^
\QCOMMENT ON COLUMN dump_test.test_table.col1 IS 'comment on column';\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -949,57 +949,57 @@ my %tests = (
create_order => 44,
create_sql => 'COMMENT ON COLUMN dump_test.composite.f1
IS \'comment on column of type\';',
- regexp => qr/^
+ regexp => qr/^
\QCOMMENT ON COLUMN dump_test.composite.f1 IS 'comment on column of type';\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON COLUMN dump_test.test_second_table.col1' => {
create_order => 63,
create_sql => 'COMMENT ON COLUMN dump_test.test_second_table.col1
IS \'comment on column col1\';',
- regexp => qr/^
+ regexp => qr/^
\QCOMMENT ON COLUMN dump_test.test_second_table.col1 IS 'comment on column col1';\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON COLUMN dump_test.test_second_table.col2' => {
create_order => 64,
create_sql => 'COMMENT ON COLUMN dump_test.test_second_table.col2
IS \'comment on column col2\';',
- regexp => qr/^
+ regexp => qr/^
\QCOMMENT ON COLUMN dump_test.test_second_table.col2 IS 'comment on column col2';\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON CONVERSION dump_test.test_conversion' => {
create_order => 79,
create_sql => 'COMMENT ON CONVERSION dump_test.test_conversion
IS \'comment on test conversion\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON CONVERSION dump_test.test_conversion IS 'comment on test conversion';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON COLLATION test0' => {
create_order => 77,
create_sql => 'COMMENT ON COLLATION test0
IS \'comment on test0 collation\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON COLLATION public.test0 IS 'comment on test0 collation';/m,
- collation => 1,
- like => { %full_runs, section_pre_data => 1, },
+ collation => 1,
+ like => { %full_runs, section_pre_data => 1, },
},
'COMMENT ON LARGE OBJECT ...' => {
@@ -1014,13 +1014,13 @@ my %tests = (
regexp => qr/^
\QCOMMENT ON LARGE OBJECT \E[0-9]+\Q IS 'comment on large object';\E
/xm,
- like => {
- %full_runs,
- column_inserts => 1,
- data_only => 1,
- section_pre_data => 1,
- test_schema_plus_blobs => 1,
- },
+ like => {
+ %full_runs,
+ column_inserts => 1,
+ data_only => 1,
+ section_pre_data => 1,
+ test_schema_plus_blobs => 1,
+ },
unlike => {
no_blobs => 1,
schema_only => 1,
@@ -1031,18 +1031,18 @@ my %tests = (
create_order => 55,
create_sql => 'COMMENT ON PUBLICATION pub1
IS \'comment on publication\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON PUBLICATION pub1 IS 'comment on publication';/m,
- like => { %full_runs, section_post_data => 1, },
+ like => { %full_runs, section_post_data => 1, },
},
'COMMENT ON SUBSCRIPTION sub1' => {
create_order => 55,
create_sql => 'COMMENT ON SUBSCRIPTION sub1
IS \'comment on subscription\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON SUBSCRIPTION sub1 IS 'comment on subscription';/m,
- like => { %full_runs, section_post_data => 1, },
+ like => { %full_runs, section_post_data => 1, },
},
'COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1' => {
@@ -1050,11 +1050,11 @@ my %tests = (
create_sql =>
'COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1
IS \'comment on text search configuration\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 IS 'comment on text search configuration';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1' => {
@@ -1062,94 +1062,94 @@ my %tests = (
create_sql =>
'COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1
IS \'comment on text search dictionary\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 IS 'comment on text search dictionary';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1' => {
create_order => 84,
create_sql => 'COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1
IS \'comment on text search parser\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TEXT SEARCH PARSER dump_test.alt_ts_prs1 IS 'comment on text search parser';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1' => {
create_order => 84,
create_sql => 'COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1
IS \'comment on text search template\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 IS 'comment on text search template';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TYPE dump_test.planets - ENUM' => {
create_order => 68,
create_sql => 'COMMENT ON TYPE dump_test.planets
IS \'comment on enum type\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TYPE dump_test.planets IS 'comment on enum type';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TYPE dump_test.textrange - RANGE' => {
create_order => 69,
create_sql => 'COMMENT ON TYPE dump_test.textrange
IS \'comment on range type\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TYPE dump_test.textrange IS 'comment on range type';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TYPE dump_test.int42 - Regular' => {
create_order => 70,
create_sql => 'COMMENT ON TYPE dump_test.int42
IS \'comment on regular type\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TYPE dump_test.int42 IS 'comment on regular type';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COMMENT ON TYPE dump_test.undefined - Undefined' => {
create_order => 71,
create_sql => 'COMMENT ON TYPE dump_test.undefined
IS \'comment on undefined type\';',
- regexp =>
+ regexp =>
qr/^COMMENT ON TYPE dump_test.undefined IS 'comment on undefined type';/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'COPY test_table' => {
create_order => 4,
create_sql => 'INSERT INTO dump_test.test_table (col1) '
. 'SELECT generate_series FROM generate_series(1,9);',
- regexp => qr/^
+ regexp => qr/^
\QCOPY dump_test.test_table (col1, col2, col3, col4) FROM stdin;\E
\n(?:\d\t\\N\t\\N\t\\N\n){9}\\\.\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- data_only => 1,
- only_dump_test_table => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ data_only => 1,
+ only_dump_test_table => 1,
+ section_data => 1,
+ },
unlike => {
binary_upgrade => 1,
exclude_dump_test_schema => 1,
@@ -1163,18 +1163,18 @@ my %tests = (
create_order => 22,
create_sql => 'INSERT INTO dump_test.fk_reference_test_table (col1) '
. 'SELECT generate_series FROM generate_series(1,5);',
- regexp => qr/^
+ regexp => qr/^
\QCOPY dump_test.fk_reference_test_table (col1) FROM stdin;\E
\n(?:\d\n){5}\\\.\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- data_only => 1,
- exclude_test_table => 1,
- exclude_test_table_data => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ data_only => 1,
+ exclude_test_table => 1,
+ exclude_test_table_data => 1,
+ section_data => 1,
+ },
unlike => {
binary_upgrade => 1,
exclude_dump_test_schema => 1,
@@ -1192,7 +1192,7 @@ my %tests = (
\QCOPY dump_test.fk_reference_test_table (col1) FROM stdin;\E
\n(?:\d\n){5}\\\.\n
/xms,
- like => { data_only => 1, },
+ like => { data_only => 1, },
},
'COPY test_second_table' => {
@@ -1200,16 +1200,16 @@ my %tests = (
create_sql => 'INSERT INTO dump_test.test_second_table (col1, col2) '
. 'SELECT generate_series, generate_series::text '
. 'FROM generate_series(1,9);',
- regexp => qr/^
+ regexp => qr/^
\QCOPY dump_test.test_second_table (col1, col2) FROM stdin;\E
\n(?:\d\t\d\n){9}\\\.\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- data_only => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ data_only => 1,
+ section_data => 1,
+ },
unlike => {
binary_upgrade => 1,
exclude_dump_test_schema => 1,
@@ -1221,16 +1221,16 @@ my %tests = (
create_order => 7,
create_sql =>
'INSERT INTO dump_test.test_fourth_table DEFAULT VALUES;',
- regexp => qr/^
+ regexp => qr/^
\QCOPY dump_test.test_fourth_table FROM stdin;\E
\n\n\\\.\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- data_only => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ data_only => 1,
+ section_data => 1,
+ },
unlike => {
binary_upgrade => 1,
exclude_dump_test_schema => 1,
@@ -1242,16 +1242,16 @@ my %tests = (
create_order => 54,
create_sql =>
'INSERT INTO dump_test.test_fifth_table VALUES (NULL, true, false, \'11001\'::bit(5), \'NaN\');',
- regexp => qr/^
+ regexp => qr/^
\QCOPY dump_test.test_fifth_table (col1, col2, col3, col4, col5) FROM stdin;\E
\n\\N\tt\tf\t11001\tNaN\n\\\.\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- data_only => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ data_only => 1,
+ section_data => 1,
+ },
unlike => {
binary_upgrade => 1,
exclude_dump_test_schema => 1,
@@ -1263,16 +1263,16 @@ my %tests = (
create_order => 54,
create_sql =>
'INSERT INTO dump_test.test_table_identity (col2) VALUES (\'test\');',
- regexp => qr/^
+ regexp => qr/^
\QCOPY dump_test.test_table_identity (col1, col2) FROM stdin;\E
\n1\ttest\n\\\.\n
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- data_only => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ data_only => 1,
+ section_data => 1,
+ },
unlike => {
binary_upgrade => 1,
exclude_dump_test_schema => 1,
@@ -1291,25 +1291,25 @@ my %tests = (
regexp => qr/^
(?:INSERT\ INTO\ dump_test.test_second_table\ \(col1,\ col2\)
\ VALUES\ \(\d,\ '\d'\);\n){9}/xm,
- like => { column_inserts => 1, },
+ like => { column_inserts => 1, },
},
'INSERT INTO test_fourth_table' => {
regexp =>
qr/^\QINSERT INTO dump_test.test_fourth_table DEFAULT VALUES;\E/m,
- like => { column_inserts => 1, },
+ like => { column_inserts => 1, },
},
'INSERT INTO test_fifth_table' => {
regexp =>
qr/^\QINSERT INTO dump_test.test_fifth_table (col1, col2, col3, col4, col5) VALUES (NULL, true, false, B'11001', 'NaN');\E/m,
- like => { column_inserts => 1, },
+ like => { column_inserts => 1, },
},
'INSERT INTO test_table_identity' => {
regexp =>
qr/^\QINSERT INTO dump_test.test_table_identity (col1, col2) OVERRIDING SYSTEM VALUE VALUES (1, 'test');\E/m,
- like => { column_inserts => 1, },
+ like => { column_inserts => 1, },
},
'CREATE ROLE regress_dump_test_role' => {
@@ -1327,9 +1327,9 @@ my %tests = (
create_order => 52,
create_sql =>
'CREATE ACCESS METHOD gist2 TYPE INDEX HANDLER gisthandler;',
- regexp =>
+ regexp =>
qr/CREATE ACCESS METHOD gist2 TYPE INDEX HANDLER gisthandler;/m,
- like => { %full_runs, section_pre_data => 1, },
+ like => { %full_runs, section_pre_data => 1, },
},
'CREATE COLLATION test0 FROM "C"' => {
@@ -1337,24 +1337,24 @@ my %tests = (
create_sql => 'CREATE COLLATION test0 FROM "C";',
regexp => qr/^
\QCREATE COLLATION public.test0 (provider = libc, locale = 'C');\E/xm,
- collation => 1,
- like => { %full_runs, section_pre_data => 1, },
+ collation => 1,
+ like => { %full_runs, section_pre_data => 1, },
},
'CREATE CAST FOR timestamptz' => {
create_order => 51,
create_sql =>
'CREATE CAST (timestamptz AS interval) WITH FUNCTION age(timestamptz) AS ASSIGNMENT;',
- regexp =>
+ regexp =>
qr/CREATE CAST \(timestamp with time zone AS interval\) WITH FUNCTION pg_catalog\.age\(timestamp with time zone\) AS ASSIGNMENT;/m,
- like => { %full_runs, section_pre_data => 1, },
+ like => { %full_runs, section_pre_data => 1, },
},
'CREATE DATABASE postgres' => {
regexp => qr/^
\QCREATE DATABASE postgres WITH TEMPLATE = template0 \E
.*;/xm,
- like => { createdb => 1, },
+ like => { createdb => 1, },
},
'CREATE DATABASE dump_test' => {
@@ -1363,7 +1363,7 @@ my %tests = (
regexp => qr/^
\QCREATE DATABASE dump_test WITH TEMPLATE = template0 \E
.*;/xm,
- like => { pg_dumpall_dbprivs => 1, },
+ like => { pg_dumpall_dbprivs => 1, },
},
'CREATE EXTENSION ... plpgsql' => {
@@ -1371,8 +1371,8 @@ my %tests = (
\QCREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;\E
/xm,
- # this shouldn't ever get emitted anymore
- like => {},
+ # this shouldn't ever get emitted anymore
+ like => {},
},
'CREATE AGGREGATE dump_test.newavg' => {
@@ -1385,7 +1385,7 @@ my %tests = (
finalfunc_modify = shareable,
initcond1 = \'{0,0}\'
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE AGGREGATE dump_test.newavg(integer) (\E
\n\s+\QSFUNC = int4_avg_accum,\E
\n\s+\QSTYPE = bigint[],\E
@@ -1393,12 +1393,12 @@ my %tests = (
\n\s+\QFINALFUNC = int8_avg,\E
\n\s+\QFINALFUNC_MODIFY = SHAREABLE\E
\n\);/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- exclude_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ exclude_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => { exclude_dump_test_schema => 1, },
},
@@ -1406,11 +1406,11 @@ my %tests = (
create_order => 78,
create_sql =>
'CREATE DEFAULT CONVERSION dump_test.test_conversion FOR \'LATIN1\' TO \'UTF8\' FROM iso8859_1_to_utf8;',
- regexp =>
+ regexp =>
qr/^\QCREATE DEFAULT CONVERSION dump_test.test_conversion FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\E/xm,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE DOMAIN dump_test.us_postal_code' => {
@@ -1419,17 +1419,17 @@ my %tests = (
COLLATE "C"
DEFAULT \'10014\'
CHECK(VALUE ~ \'^\d{5}$\' OR
- VALUE ~ \'^\d{5}-\d{4}$\');',
- regexp => qr/^
+ VALUE ~ \'^\d{5}-\d{4}$\');',
+ regexp => qr/^
\QCREATE DOMAIN dump_test.us_postal_code AS text COLLATE pg_catalog."C" DEFAULT '10014'::text\E\n\s+
\QCONSTRAINT us_postal_code_check CHECK \E
\Q(((VALUE ~ '^\d{5}\E
\$\Q'::text) OR (VALUE ~ '^\d{5}-\d{4}\E\$
\Q'::text)));\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE FUNCTION dump_test.pltestlang_call_handler' => {
@@ -1437,16 +1437,16 @@ my %tests = (
create_sql => 'CREATE FUNCTION dump_test.pltestlang_call_handler()
RETURNS LANGUAGE_HANDLER AS \'$libdir/plpgsql\',
\'plpgsql_call_handler\' LANGUAGE C;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE FUNCTION dump_test.pltestlang_call_handler() \E
\QRETURNS language_handler\E
\n\s+\QLANGUAGE c\E
\n\s+AS\ \'\$
\Qlibdir\/plpgsql', 'plpgsql_call_handler';\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE FUNCTION dump_test.trigger_func' => {
@@ -1454,15 +1454,15 @@ my %tests = (
create_sql => 'CREATE FUNCTION dump_test.trigger_func()
RETURNS trigger LANGUAGE plpgsql
AS $$ BEGIN RETURN NULL; END;$$;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE FUNCTION dump_test.trigger_func() RETURNS trigger\E
\n\s+\QLANGUAGE plpgsql\E
\n\s+AS\ \$\$
\Q BEGIN RETURN NULL; END;\E
\$\$;/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE FUNCTION dump_test.event_trigger_func' => {
@@ -1470,27 +1470,27 @@ my %tests = (
create_sql => 'CREATE FUNCTION dump_test.event_trigger_func()
RETURNS event_trigger LANGUAGE plpgsql
AS $$ BEGIN RETURN; END;$$;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE FUNCTION dump_test.event_trigger_func() RETURNS event_trigger\E
\n\s+\QLANGUAGE plpgsql\E
\n\s+AS\ \$\$
\Q BEGIN RETURN; END;\E
\$\$;/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE OPERATOR FAMILY dump_test.op_family' => {
create_order => 73,
create_sql =>
'CREATE OPERATOR FAMILY dump_test.op_family USING btree;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE OPERATOR FAMILY dump_test.op_family USING btree;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE OPERATOR CLASS dump_test.op_class' => {
@@ -1505,7 +1505,7 @@ my %tests = (
OPERATOR 5 >(bigint,bigint),
FUNCTION 1 btint8cmp(bigint,bigint),
FUNCTION 2 btint8sortsupport(internal);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE OPERATOR CLASS dump_test.op_class\E\n\s+
\QFOR TYPE bigint USING btree FAMILY dump_test.op_family AS\E\n\s+
\QOPERATOR 1 <(bigint,bigint) ,\E\n\s+
@@ -1516,9 +1516,9 @@ my %tests = (
\QFUNCTION 1 (bigint, bigint) btint8cmp(bigint,bigint) ,\E\n\s+
\QFUNCTION 2 (bigint, bigint) btint8sortsupport(internal);\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE OPERATOR CLASS dump_test.op_class_empty' => {
@@ -1526,14 +1526,14 @@ my %tests = (
create_sql => 'CREATE OPERATOR CLASS dump_test.op_class_empty
FOR TYPE bigint USING btree FAMILY dump_test.op_family
AS STORAGE bigint;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE OPERATOR CLASS dump_test.op_class_empty\E\n\s+
\QFOR TYPE bigint USING btree FAMILY dump_test.op_family AS\E\n\s+
\QSTORAGE bigint;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE EVENT TRIGGER test_event_trigger' => {
@@ -1541,12 +1541,12 @@ my %tests = (
create_sql => 'CREATE EVENT TRIGGER test_event_trigger
ON ddl_command_start
EXECUTE PROCEDURE dump_test.event_trigger_func();',
- regexp => qr/^
+ regexp => qr/^
\QCREATE EVENT TRIGGER test_event_trigger \E
\QON ddl_command_start\E
\n\s+\QEXECUTE PROCEDURE dump_test.event_trigger_func();\E
/xm,
- like => { %full_runs, section_post_data => 1, },
+ like => { %full_runs, section_post_data => 1, },
},
'CREATE TRIGGER test_trigger' => {
@@ -1555,17 +1555,17 @@ my %tests = (
BEFORE INSERT ON dump_test.test_table
FOR EACH ROW WHEN (NEW.col1 > 10)
EXECUTE PROCEDURE dump_test.trigger_func();',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TRIGGER test_trigger BEFORE INSERT ON dump_test.test_table \E
\QFOR EACH ROW WHEN ((new.col1 > 10)) \E
\QEXECUTE PROCEDURE dump_test.trigger_func();\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_test_table => 1,
exclude_dump_test_schema => 1,
@@ -1576,18 +1576,18 @@ my %tests = (
create_order => 37,
create_sql => 'CREATE TYPE dump_test.planets
AS ENUM ( \'venus\', \'earth\', \'mars\' );',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TYPE dump_test.planets AS ENUM (\E
\n\s+'venus',
\n\s+'earth',
\n\s+'mars'
\n\);/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- binary_upgrade => 1,
- exclude_dump_test_schema => 1,
- },
+ unlike => {
+ binary_upgrade => 1,
+ exclude_dump_test_schema => 1,
+ },
},
'CREATE TYPE dump_test.planets AS ENUM pg_upgrade' => {
@@ -1600,21 +1600,21 @@ my %tests = (
\n.*^
\QALTER TYPE dump_test.planets ADD VALUE 'mars';\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE TYPE dump_test.textrange AS RANGE' => {
create_order => 38,
create_sql => 'CREATE TYPE dump_test.textrange
AS RANGE (subtype=text, collation="C");',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TYPE dump_test.textrange AS RANGE (\E
\n\s+\Qsubtype = text,\E
\n\s+\Qcollation = pg_catalog."C"\E
\n\);/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TYPE dump_test.int42' => {
@@ -1622,20 +1622,20 @@ my %tests = (
create_sql => 'CREATE TYPE dump_test.int42;',
regexp => qr/^CREATE TYPE dump_test.int42;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1' => {
create_order => 80,
create_sql =>
'CREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 (copy=english);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 (\E\n
\s+\QPARSER = pg_catalog."default" );\E/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'ALTER TEXT SEARCH CONFIGURATION dump_test.alt_ts_conf1 ...' => {
@@ -1698,50 +1698,50 @@ my %tests = (
\s+\QADD MAPPING FOR uint WITH simple;\E\n
\n
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1' => {
create_order => 81,
create_sql =>
'CREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 (lexize=dsimple_lexize);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TEXT SEARCH TEMPLATE dump_test.alt_ts_temp1 (\E\n
\s+\QLEXIZE = dsimple_lexize );\E/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1' => {
create_order => 82,
create_sql => 'CREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1
(start = prsd_start, gettoken = prsd_nexttoken, end = prsd_end, lextypes = prsd_lextype);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TEXT SEARCH PARSER dump_test.alt_ts_prs1 (\E\n
\s+\QSTART = prsd_start,\E\n
\s+\QGETTOKEN = prsd_nexttoken,\E\n
\s+\QEND = prsd_end,\E\n
\s+\QLEXTYPES = prsd_lextype );\E\n
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1' => {
create_order => 83,
create_sql =>
'CREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 (template=simple);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TEXT SEARCH DICTIONARY dump_test.alt_ts_dict1 (\E\n
\s+\QTEMPLATE = pg_catalog.simple );\E\n
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE FUNCTION dump_test.int42_in' => {
@@ -1749,14 +1749,14 @@ my %tests = (
create_sql => 'CREATE FUNCTION dump_test.int42_in(cstring)
RETURNS dump_test.int42 AS \'int4in\'
LANGUAGE internal STRICT IMMUTABLE;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE FUNCTION dump_test.int42_in(cstring) RETURNS dump_test.int42\E
\n\s+\QLANGUAGE internal IMMUTABLE STRICT\E
\n\s+AS\ \$\$int4in\$\$;
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE FUNCTION dump_test.int42_out' => {
@@ -1764,28 +1764,28 @@ my %tests = (
create_sql => 'CREATE FUNCTION dump_test.int42_out(dump_test.int42)
RETURNS cstring AS \'int4out\'
LANGUAGE internal STRICT IMMUTABLE;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE FUNCTION dump_test.int42_out(dump_test.int42) RETURNS cstring\E
\n\s+\QLANGUAGE internal IMMUTABLE STRICT\E
\n\s+AS\ \$\$int4out\$\$;
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE PROCEDURE dump_test.ptest1' => {
create_order => 41,
create_sql => 'CREATE PROCEDURE dump_test.ptest1(a int)
LANGUAGE SQL AS $$ INSERT INTO dump_test.test_table (col1) VALUES (a) $$;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE PROCEDURE dump_test.ptest1(a integer)\E
\n\s+\QLANGUAGE sql\E
\n\s+AS\ \$\$\Q INSERT INTO dump_test.test_table (col1) VALUES (a) \E\$\$;
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TYPE dump_test.int42 populated' => {
@@ -1797,7 +1797,7 @@ my %tests = (
alignment = int4,
default = 42,
passedbyvalue);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TYPE dump_test.int42 (\E
\n\s+\QINTERNALLENGTH = 4,\E
\n\s+\QINPUT = dump_test.int42_in,\E
@@ -1808,8 +1808,8 @@ my %tests = (
\n\s+PASSEDBYVALUE\n\);
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TYPE dump_test.composite' => {
@@ -1818,15 +1818,15 @@ my %tests = (
f1 int,
f2 dump_test.int42
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TYPE dump_test.composite AS (\E
\n\s+\Qf1 integer,\E
\n\s+\Qf2 dump_test.int42\E
\n\);
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TYPE dump_test.undefined' => {
@@ -1834,8 +1834,8 @@ my %tests = (
create_sql => 'CREATE TYPE dump_test.undefined;',
regexp => qr/^CREATE TYPE dump_test.undefined;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE FOREIGN DATA WRAPPER dummy' => {
@@ -1857,7 +1857,7 @@ my %tests = (
create_sql =>
'CREATE FOREIGN TABLE dump_test.foreign_table (c1 int options (column_name \'col1\'))
SERVER s1 OPTIONS (schema_name \'x1\');',
- regexp => qr/
+ regexp => qr/
\QCREATE FOREIGN TABLE dump_test.foreign_table (\E\n
\s+\Qc1 integer\E\n
\Q)\E\n
@@ -1866,54 +1866,54 @@ my %tests = (
\s+\Qschema_name 'x1'\E\n
\Q);\E\n
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE USER MAPPING FOR regress_dump_test_role SERVER s1' => {
create_order => 86,
create_sql =>
'CREATE USER MAPPING FOR regress_dump_test_role SERVER s1;',
- regexp =>
+ regexp =>
qr/CREATE USER MAPPING FOR regress_dump_test_role SERVER s1;/m,
- like => { %full_runs, section_pre_data => 1, },
+ like => { %full_runs, section_pre_data => 1, },
},
'CREATE TRANSFORM FOR int' => {
create_order => 34,
create_sql =>
'CREATE TRANSFORM FOR int LANGUAGE SQL (FROM SQL WITH FUNCTION varchar_transform(internal), TO SQL WITH FUNCTION int4recv(internal));',
- regexp =>
+ regexp =>
qr/CREATE TRANSFORM FOR integer LANGUAGE sql \(FROM SQL WITH FUNCTION pg_catalog\.varchar_transform\(internal\), TO SQL WITH FUNCTION pg_catalog\.int4recv\(internal\)\);/m,
- like => { %full_runs, section_pre_data => 1, },
+ like => { %full_runs, section_pre_data => 1, },
},
'CREATE LANGUAGE pltestlang' => {
create_order => 18,
create_sql => 'CREATE LANGUAGE pltestlang
HANDLER dump_test.pltestlang_call_handler;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE PROCEDURAL LANGUAGE pltestlang \E
\QHANDLER dump_test.pltestlang_call_handler;\E
/xm,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE MATERIALIZED VIEW matview' => {
create_order => 20,
create_sql => 'CREATE MATERIALIZED VIEW dump_test.matview (col1) AS
SELECT col1 FROM dump_test.test_table;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE MATERIALIZED VIEW dump_test.matview AS\E
\n\s+\QSELECT test_table.col1\E
\n\s+\QFROM dump_test.test_table\E
\n\s+\QWITH NO DATA;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE MATERIALIZED VIEW matview_second' => {
@@ -1921,15 +1921,15 @@ my %tests = (
create_sql => 'CREATE MATERIALIZED VIEW
dump_test.matview_second (col1) AS
SELECT * FROM dump_test.matview;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE MATERIALIZED VIEW dump_test.matview_second AS\E
\n\s+\QSELECT matview.col1\E
\n\s+\QFROM dump_test.matview\E
\n\s+\QWITH NO DATA;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE MATERIALIZED VIEW matview_third' => {
@@ -1937,15 +1937,15 @@ my %tests = (
create_sql => 'CREATE MATERIALIZED VIEW
dump_test.matview_third (col1) AS
SELECT * FROM dump_test.matview_second WITH NO DATA;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE MATERIALIZED VIEW dump_test.matview_third AS\E
\n\s+\QSELECT matview_second.col1\E
\n\s+\QFROM dump_test.matview_second\E
\n\s+\QWITH NO DATA;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE MATERIALIZED VIEW matview_fourth' => {
@@ -1953,15 +1953,15 @@ my %tests = (
create_sql => 'CREATE MATERIALIZED VIEW
dump_test.matview_fourth (col1) AS
SELECT * FROM dump_test.matview_third WITH NO DATA;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE MATERIALIZED VIEW dump_test.matview_fourth AS\E
\n\s+\QSELECT matview_third.col1\E
\n\s+\QFROM dump_test.matview_third\E
\n\s+\QWITH NO DATA;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE POLICY p1 ON test_table' => {
@@ -1969,16 +1969,16 @@ my %tests = (
create_sql => 'CREATE POLICY p1 ON dump_test.test_table
USING (true)
WITH CHECK (true);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE POLICY p1 ON dump_test.test_table \E
\QUSING (true) WITH CHECK (true);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -1989,16 +1989,16 @@ my %tests = (
create_order => 24,
create_sql => 'CREATE POLICY p2 ON dump_test.test_table
FOR SELECT TO regress_dump_test_role USING (true);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE POLICY p2 ON dump_test.test_table FOR SELECT TO regress_dump_test_role \E
\QUSING (true);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -2009,16 +2009,16 @@ my %tests = (
create_order => 25,
create_sql => 'CREATE POLICY p3 ON dump_test.test_table
FOR INSERT TO regress_dump_test_role WITH CHECK (true);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE POLICY p3 ON dump_test.test_table FOR INSERT \E
\QTO regress_dump_test_role WITH CHECK (true);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -2029,16 +2029,16 @@ my %tests = (
create_order => 26,
create_sql => 'CREATE POLICY p4 ON dump_test.test_table FOR UPDATE
TO regress_dump_test_role USING (true) WITH CHECK (true);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE POLICY p4 ON dump_test.test_table FOR UPDATE TO regress_dump_test_role \E
\QUSING (true) WITH CHECK (true);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -2049,16 +2049,16 @@ my %tests = (
create_order => 27,
create_sql => 'CREATE POLICY p5 ON dump_test.test_table
FOR DELETE TO regress_dump_test_role USING (true);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE POLICY p5 ON dump_test.test_table FOR DELETE \E
\QTO regress_dump_test_role USING (true);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -2069,16 +2069,16 @@ my %tests = (
create_order => 27,
create_sql => 'CREATE POLICY p6 ON dump_test.test_table AS RESTRICTIVE
USING (false);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE POLICY p6 ON dump_test.test_table AS RESTRICTIVE \E
\QUSING (false);\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_post_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_post_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -2091,7 +2091,7 @@ my %tests = (
regexp => qr/^
\QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
/xm,
- like => { %full_runs, section_post_data => 1, },
+ like => { %full_runs, section_post_data => 1, },
},
'CREATE PUBLICATION pub2' => {
@@ -2099,10 +2099,10 @@ my %tests = (
create_sql => 'CREATE PUBLICATION pub2
FOR ALL TABLES
WITH (publish = \'\');',
- regexp => qr/^
+ regexp => qr/^
\QCREATE PUBLICATION pub2 FOR ALL TABLES WITH (publish = '');\E
/xm,
- like => { %full_runs, section_post_data => 1, },
+ like => { %full_runs, section_post_data => 1, },
},
'CREATE SUBSCRIPTION sub1' => {
@@ -2110,35 +2110,35 @@ my %tests = (
create_sql => 'CREATE SUBSCRIPTION sub1
CONNECTION \'dbname=doesnotexist\' PUBLICATION pub1
WITH (connect = false);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE SUBSCRIPTION sub1 CONNECTION 'dbname=doesnotexist' PUBLICATION pub1 WITH (connect = false, slot_name = 'sub1');\E
/xm,
- like => { %full_runs, section_post_data => 1, },
+ like => { %full_runs, section_post_data => 1, },
},
'ALTER PUBLICATION pub1 ADD TABLE test_table' => {
create_order => 51,
create_sql =>
'ALTER PUBLICATION pub1 ADD TABLE dump_test.test_table;',
- regexp => qr/^
+ regexp => qr/^
\QALTER PUBLICATION pub1 ADD TABLE ONLY dump_test.test_table;\E
/xm,
- like => { %full_runs, section_post_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- exclude_test_table => 1,
- },
+ like => { %full_runs, section_post_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ exclude_test_table => 1,
+ },
},
'ALTER PUBLICATION pub1 ADD TABLE test_second_table' => {
create_order => 52,
create_sql =>
'ALTER PUBLICATION pub1 ADD TABLE dump_test.test_second_table;',
- regexp => qr/^
+ regexp => qr/^
\QALTER PUBLICATION pub1 ADD TABLE ONLY dump_test.test_second_table;\E
/xm,
- like => { %full_runs, section_post_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE SCHEMA public' => {
@@ -2153,8 +2153,8 @@ my %tests = (
create_sql => 'CREATE SCHEMA dump_test;',
regexp => qr/^CREATE SCHEMA dump_test;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE SCHEMA dump_test_second_schema' => {
@@ -2177,7 +2177,7 @@ my %tests = (
col4 text,
CHECK (col1 <= 1000)
) WITH (autovacuum_enabled = false, fillfactor=80);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE dump_test.test_table (\E\n
\s+\Qcol1 integer NOT NULL,\E\n
\s+\Qcol2 text,\E\n
@@ -2186,12 +2186,12 @@ my %tests = (
\s+\QCONSTRAINT test_table_col1_check CHECK ((col1 <= 1000))\E\n
\Q)\E\n
\QWITH (autovacuum_enabled='false', fillfactor='80');\E\n/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => {
exclude_dump_test_schema => 1,
exclude_test_table => 1,
@@ -2203,14 +2203,14 @@ my %tests = (
create_sql => 'CREATE TABLE dump_test.fk_reference_test_table (
col1 int primary key references dump_test.test_table
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE dump_test.fk_reference_test_table (\E
\n\s+\Qcol1 integer NOT NULL\E
\n\);
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TABLE test_second_table' => {
@@ -2219,15 +2219,15 @@ my %tests = (
col1 int,
col2 text
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE dump_test.test_second_table (\E
\n\s+\Qcol1 integer,\E
\n\s+\Qcol2 text\E
\n\);
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TABLE measurement PARTITIONED BY' => {
@@ -2238,7 +2238,7 @@ my %tests = (
peaktemp int,
unitsales int
) PARTITION BY RANGE (logdate);',
- regexp => qr/^
+ regexp => qr/^
\Q-- Name: measurement;\E.*\n
\Q--\E\n\n
\QCREATE TABLE dump_test.measurement (\E\n
@@ -2249,12 +2249,12 @@ my %tests = (
\)\n
\QPARTITION BY RANGE (logdate);\E\n
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- binary_upgrade => 1,
- exclude_dump_test_schema => 1,
- },
+ unlike => {
+ binary_upgrade => 1,
+ exclude_dump_test_schema => 1,
+ },
},
'CREATE TABLE measurement_y2006m2 PARTITION OF' => {
@@ -2265,7 +2265,7 @@ my %tests = (
unitsales DEFAULT 0 CHECK (unitsales >= 0)
)
FOR VALUES FROM (\'2006-02-01\') TO (\'2006-03-01\');',
- regexp => qr/^
+ regexp => qr/^
\Q-- Name: measurement_y2006m2;\E.*\n
\Q--\E\n\n
\QCREATE TABLE dump_test_second_schema.measurement_y2006m2 PARTITION OF dump_test.measurement (\E\n
@@ -2273,11 +2273,11 @@ my %tests = (
\)\n
\QFOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\E\n
/xm,
- like => {
- %full_runs,
- role => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ role => 1,
+ section_pre_data => 1,
+ },
unlike => { binary_upgrade => 1, },
},
@@ -2285,13 +2285,13 @@ my %tests = (
create_order => 6,
create_sql => 'CREATE TABLE dump_test.test_fourth_table (
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE dump_test.test_fourth_table (\E
\n\);
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TABLE test_fifth_table' => {
@@ -2303,7 +2303,7 @@ my %tests = (
col4 bit(5),
col5 float8
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE dump_test.test_fifth_table (\E
\n\s+\Qcol1 integer,\E
\n\s+\Qcol2 boolean,\E
@@ -2313,8 +2313,8 @@ my %tests = (
\n\);
/xm,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TABLE test_table_identity' => {
@@ -2323,7 +2323,7 @@ my %tests = (
col1 int generated always as identity primary key,
col2 text
);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE dump_test.test_table_identity (\E\n
\s+\Qcol1 integer NOT NULL,\E\n
\s+\Qcol2 text\E\n
@@ -2339,8 +2339,8 @@ my %tests = (
\);
/xms,
like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE TABLE table_with_stats' => {
@@ -2356,37 +2356,37 @@ my %tests = (
ALTER COLUMN 1 SET STATISTICS 400;
ALTER INDEX dump_test.index_with_stats
ALTER COLUMN 3 SET STATISTICS 500;',
- regexp => qr/^
+ regexp => qr/^
\QALTER INDEX dump_test.index_with_stats ALTER COLUMN 1 SET STATISTICS 400;\E\n
\QALTER INDEX dump_test.index_with_stats ALTER COLUMN 3 SET STATISTICS 500;\E\n
/xms,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE STATISTICS extended_stats_no_options' => {
create_order => 97,
create_sql => 'CREATE STATISTICS dump_test.test_ext_stats_no_options
ON col1, col2 FROM dump_test.test_fifth_table',
- regexp => qr/^
+ regexp => qr/^
\QCREATE STATISTICS dump_test.test_ext_stats_no_options ON col1, col2 FROM dump_test.test_fifth_table;\E
/xms,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE STATISTICS extended_stats_options' => {
create_order => 97,
create_sql => 'CREATE STATISTICS dump_test.test_ext_stats_opts
(ndistinct) ON col1, col2 FROM dump_test.test_fifth_table',
- regexp => qr/^
+ regexp => qr/^
\QCREATE STATISTICS dump_test.test_ext_stats_opts (ndistinct) ON col1, col2 FROM dump_test.test_fifth_table;\E
/xms,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE SEQUENCE test_table_col1_seq' => {
@@ -2399,12 +2399,12 @@ my %tests = (
\n\s+\QNO MAXVALUE\E
\n\s+\QCACHE 1;\E
/xm,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
+ },
unlike => { exclude_dump_test_schema => 1, },
},
@@ -2412,7 +2412,7 @@ my %tests = (
create_order => 92,
create_sql =>
'CREATE INDEX ON dump_test.measurement (city_id, logdate);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE INDEX measurement_city_id_logdate_idx ON ONLY dump_test.measurement USING\E
/xm,
like => {
@@ -2431,7 +2431,7 @@ my %tests = (
schema_only => 1,
section_post_data => 1,
test_schema_plus_blobs => 1,
- },
+ },
unlike => {
exclude_dump_test_schema => 1,
only_dump_test_table => 1,
@@ -2448,13 +2448,13 @@ my %tests = (
create_order => 93,
create_sql =>
'ALTER TABLE dump_test.measurement ADD PRIMARY KEY (city_id, logdate);',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE ONLY dump_test.measurement\E \n^\s+
\QADD CONSTRAINT measurement_pkey PRIMARY KEY (city_id, logdate);\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'CREATE INDEX ... ON measurement_y2006_m2' => {
@@ -2465,7 +2465,7 @@ my %tests = (
%full_runs,
role => 1,
section_post_data => 1,
- },
+ },
},
'ALTER INDEX ... ATTACH PARTITION' => {
@@ -2476,7 +2476,7 @@ my %tests = (
%full_runs,
role => 1,
section_post_data => 1,
- },
+ },
},
'ALTER INDEX ... ATTACH PARTITION (primary key)' => {
@@ -2501,7 +2501,7 @@ my %tests = (
role => 1,
schema_only => 1,
section_post_data => 1,
- },
+ },
unlike => {
only_dump_test_schema => 1,
only_dump_test_table => 1,
@@ -2517,25 +2517,25 @@ my %tests = (
create_sql => 'CREATE VIEW dump_test.test_view
WITH (check_option = \'local\', security_barrier = true) AS
SELECT col1 FROM dump_test.test_table;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE VIEW dump_test.test_view WITH (security_barrier='true') AS\E
\n\s+\QSELECT test_table.col1\E
\n\s+\QFROM dump_test.test_table\E
\n\s+\QWITH LOCAL CHECK OPTION;\E/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
'ALTER VIEW test_view SET DEFAULT' => {
create_order => 62,
create_sql =>
'ALTER VIEW dump_test.test_view ALTER COLUMN col1 SET DEFAULT 1;',
- regexp => qr/^
+ regexp => qr/^
\QALTER TABLE ONLY dump_test.test_view ALTER COLUMN col1 SET DEFAULT 1;\E/xm,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => { exclude_dump_test_schema => 1, },
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
},
# FIXME
@@ -2614,7 +2614,7 @@ my %tests = (
regexp => qr/^
\QDROP FUNCTION IF EXISTS dump_test.pltestlang_call_handler();\E
/xm,
- like => { clean_if_exists => 1, },
+ like => { clean_if_exists => 1, },
},
'DROP LANGUAGE IF EXISTS pltestlang' => {
@@ -2646,7 +2646,7 @@ my %tests = (
regexp => qr/^
\QDROP ROLE regress_dump_test_role;\E
/xm,
- like => { pg_dumpall_globals_clean => 1, },
+ like => { pg_dumpall_globals_clean => 1, },
},
'DROP ROLE pg_' => {
@@ -2662,14 +2662,14 @@ my %tests = (
create_order => 10,
create_sql => 'GRANT USAGE ON SCHEMA dump_test_second_schema
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT USAGE ON SCHEMA dump_test_second_schema TO regress_dump_test_role;\E
/xm,
- like => {
- %full_runs,
- role => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ role => 1,
+ section_pre_data => 1,
+ },
unlike => { no_privs => 1, },
},
@@ -2677,105 +2677,105 @@ my %tests = (
create_order => 85,
create_sql => 'GRANT USAGE ON FOREIGN DATA WRAPPER dummy
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON FOREIGN DATA WRAPPER dummy TO regress_dump_test_role;\E
/xm,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { no_privs => 1, },
},
'GRANT USAGE ON FOREIGN SERVER s1' => {
create_order => 85,
create_sql => 'GRANT USAGE ON FOREIGN SERVER s1
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON FOREIGN SERVER s1 TO regress_dump_test_role;\E
/xm,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { no_privs => 1, },
},
'GRANT USAGE ON DOMAIN dump_test.us_postal_code' => {
create_order => 72,
create_sql =>
'GRANT USAGE ON DOMAIN dump_test.us_postal_code TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON TYPE dump_test.us_postal_code TO regress_dump_test_role;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'GRANT USAGE ON TYPE dump_test.int42' => {
create_order => 87,
create_sql =>
'GRANT USAGE ON TYPE dump_test.int42 TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON TYPE dump_test.int42 TO regress_dump_test_role;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'GRANT USAGE ON TYPE dump_test.planets - ENUM' => {
create_order => 66,
create_sql =>
'GRANT USAGE ON TYPE dump_test.planets TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON TYPE dump_test.planets TO regress_dump_test_role;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'GRANT USAGE ON TYPE dump_test.textrange - RANGE' => {
create_order => 67,
create_sql =>
'GRANT USAGE ON TYPE dump_test.textrange TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON TYPE dump_test.textrange TO regress_dump_test_role;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'GRANT CREATE ON DATABASE dump_test' => {
create_order => 48,
create_sql =>
'GRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;\E
/xm,
- like => { pg_dumpall_dbprivs => 1, },
+ like => { pg_dumpall_dbprivs => 1, },
},
'GRANT SELECT ON TABLE test_table' => {
create_order => 5,
create_sql => 'GRANT SELECT ON TABLE dump_test.test_table
TO regress_dump_test_role;',
- regexp =>
+ regexp =>
qr/^GRANT SELECT ON TABLE dump_test.test_table TO regress_dump_test_role;/m,
- like => {
- %full_runs,
- %dump_test_schema_runs,
- only_dump_test_table => 1,
- section_pre_data => 1,
+ like => {
+ %full_runs,
+ %dump_test_schema_runs,
+ only_dump_test_table => 1,
+ section_pre_data => 1,
},
unlike => {
exclude_dump_test_schema => 1,
@@ -2789,14 +2789,14 @@ my %tests = (
create_sql => 'GRANT SELECT ON
TABLE dump_test.measurement
TO regress_dump_test_role;',
- regexp =>
- qr/^GRANT SELECT ON TABLE dump_test.measurement TO regress_dump_test_role;/m,
- like =>
- { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ regexp =>
+ qr/^GRANT SELECT ON TABLE dump_test.measurement TO regress_dump_test_role;/m,
+ like =>
+ { %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'GRANT SELECT ON TABLE measurement_y2006m2' => {
@@ -2804,13 +2804,13 @@ my %tests = (
create_sql => 'GRANT SELECT ON
TABLE dump_test_second_schema.measurement_y2006m2
TO regress_dump_test_role;',
- regexp =>
- qr/^GRANT SELECT ON TABLE dump_test_second_schema.measurement_y2006m2 TO regress_dump_test_role;/m,
- like => {
- %full_runs,
- role => 1,
- section_pre_data => 1,
- },
+ regexp =>
+ qr/^GRANT SELECT ON TABLE dump_test_second_schema.measurement_y2006m2 TO regress_dump_test_role;/m,
+ like => {
+ %full_runs,
+ role => 1,
+ section_pre_data => 1,
+ },
unlike => { no_privs => 1, },
},
@@ -2826,14 +2826,14 @@ my %tests = (
regexp => qr/^
\QGRANT ALL ON LARGE OBJECT \E[0-9]+\Q TO regress_dump_test_role;\E
/xm,
- like => {
- %full_runs,
- column_inserts => 1,
- data_only => 1,
- section_pre_data => 1,
- test_schema_plus_blobs => 1,
- binary_upgrade => 1,
- },
+ like => {
+ %full_runs,
+ column_inserts => 1,
+ data_only => 1,
+ section_pre_data => 1,
+ test_schema_plus_blobs => 1,
+ binary_upgrade => 1,
+ },
unlike => {
no_blobs => 1,
no_privs => 1,
@@ -2846,26 +2846,26 @@ my %tests = (
create_sql =>
'GRANT INSERT (col1) ON TABLE dump_test.test_second_table
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT INSERT(col1) ON TABLE dump_test.test_second_table TO regress_dump_test_role;\E
/xm,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_pre_data => 1, },
- unlike => {
- exclude_dump_test_schema => 1,
- no_privs => 1,
- },
+ unlike => {
+ exclude_dump_test_schema => 1,
+ no_privs => 1,
+ },
},
'GRANT EXECUTE ON FUNCTION pg_sleep() TO regress_dump_test_role' => {
create_order => 16,
create_sql => 'GRANT EXECUTE ON FUNCTION pg_sleep(float8)
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT ALL ON FUNCTION pg_catalog.pg_sleep(double precision) TO regress_dump_test_role;\E
/xm,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { no_privs => 1, },
},
'GRANT SELECT (proname ...) ON TABLE pg_proc TO public' => {
@@ -2902,7 +2902,7 @@ my %tests = (
proconfig,
proacl
) ON TABLE pg_proc TO public;',
- regexp => qr/
+ regexp => qr/
\QGRANT SELECT(tableoid) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.*
\QGRANT SELECT(oid) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.*
\QGRANT SELECT(proname) ON TABLE pg_catalog.pg_proc TO PUBLIC;\E\n.*
@@ -2943,18 +2943,18 @@ my %tests = (
\QGRANT USAGE ON SCHEMA public TO PUBLIC;\E
/xm,
- # this shouldn't ever get emitted anymore
- like => {},
+ # this shouldn't ever get emitted anymore
+ like => {},
},
'REFRESH MATERIALIZED VIEW matview' => {
regexp => qr/^REFRESH MATERIALIZED VIEW dump_test.matview;/m,
like =>
- { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => {
- binary_upgrade => 1,
- exclude_dump_test_schema => 1,
- schema_only => 1,
+ { %full_runs, %dump_test_schema_runs, section_post_data => 1, },
+ unlike => {
+ binary_upgrade => 1,
+ exclude_dump_test_schema => 1,
+ schema_only => 1,
},
},
@@ -2964,13 +2964,13 @@ my %tests = (
\n.*
\QREFRESH MATERIALIZED VIEW dump_test.matview_second;\E
/xms,
- like =>
+ like =>
{ %full_runs, %dump_test_schema_runs, section_post_data => 1, },
- unlike => {
- binary_upgrade => 1,
- exclude_dump_test_schema => 1,
- schema_only => 1,
- },
+ unlike => {
+ binary_upgrade => 1,
+ exclude_dump_test_schema => 1,
+ schema_only => 1,
+ },
},
# FIXME
@@ -2978,7 +2978,7 @@ my %tests = (
regexp => qr/^
\QREFRESH MATERIALIZED VIEW dump_test.matview_third;\E
/xms,
- like => {},
+ like => {},
},
# FIXME
@@ -2986,7 +2986,7 @@ my %tests = (
regexp => qr/^
\QREFRESH MATERIALIZED VIEW dump_test.matview_fourth;\E
/xms,
- like => {},
+ like => {},
},
'REVOKE CONNECT ON DATABASE dump_test FROM public' => {
@@ -2997,18 +2997,18 @@ my %tests = (
\QGRANT TEMPORARY ON DATABASE dump_test TO PUBLIC;\E\n
\QGRANT CREATE ON DATABASE dump_test TO regress_dump_test_role;\E
/xm,
- like => { pg_dumpall_dbprivs => 1, },
+ like => { pg_dumpall_dbprivs => 1, },
},
'REVOKE EXECUTE ON FUNCTION pg_sleep() FROM public' => {
create_order => 15,
create_sql => 'REVOKE EXECUTE ON FUNCTION pg_sleep(float8)
FROM public;',
- regexp => qr/^
+ regexp => qr/^
\QREVOKE ALL ON FUNCTION pg_catalog.pg_sleep(double precision) FROM PUBLIC;\E
/xm,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { no_privs => 1, },
},
'REVOKE SELECT ON TABLE pg_proc FROM public' => {
@@ -3016,8 +3016,8 @@ my %tests = (
create_sql => 'REVOKE SELECT ON TABLE pg_proc FROM public;',
regexp =>
qr/^REVOKE SELECT ON TABLE pg_catalog.pg_proc FROM PUBLIC;/m,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { no_privs => 1, },
},
'REVOKE CREATE ON SCHEMA public FROM public' => {
@@ -3027,8 +3027,8 @@ my %tests = (
\QREVOKE ALL ON SCHEMA public FROM PUBLIC;\E
\n\QGRANT USAGE ON SCHEMA public TO PUBLIC;\E
/xm,
- like => { %full_runs, section_pre_data => 1, },
- unlike => { no_privs => 1, },
+ like => { %full_runs, section_pre_data => 1, },
+ unlike => { no_privs => 1, },
},
'REVOKE USAGE ON LANGUAGE plpgsql FROM public' => {
@@ -3295,7 +3295,7 @@ foreach my $run (sort keys %pgdump_runs)
&& !defined($tests{$test}->{unlike}->{$test_key}))
{
if (!ok($output_file =~ $tests{$test}->{regexp},
- "$run: should dump $test"))
+ "$run: should dump $test"))
{
diag("Review $run results in $tempdir");
}
@@ -3303,7 +3303,7 @@ foreach my $run (sort keys %pgdump_runs)
else
{
if (!ok($output_file !~ $tests{$test}->{regexp},
- "$run: should not dump $test"))
+ "$run: should not dump $test"))
{
diag("Review $run results in $tempdir");
}
diff --git a/src/bin/pg_dump/t/010_dump_connstr.pl b/src/bin/pg_dump/t/010_dump_connstr.pl
index c40b30f..2c09ec4 100644
--- a/src/bin/pg_dump/t/010_dump_connstr.pl
+++ b/src/bin/pg_dump/t/010_dump_connstr.pl
@@ -17,7 +17,7 @@ $ENV{PGCLIENTENCODING} = 'LATIN1';
# because of pg_regress --create-role, skip [\n\r] because pg_dumpall
# does not allow them.
my $dbname1 =
- generate_ascii_string(1, 9)
+ generate_ascii_string(1, 9)
. generate_ascii_string(11, 12)
. generate_ascii_string(14, 33)
. ($TestLib::windows_os ? '' : '"x"')
diff --git a/src/bin/pg_resetwal/t/001_basic.pl b/src/bin/pg_resetwal/t/001_basic.pl
index ca93ddb..dba1677 100644
--- a/src/bin/pg_resetwal/t/001_basic.pl
+++ b/src/bin/pg_resetwal/t/001_basic.pl
@@ -17,7 +17,7 @@ command_like([ 'pg_resetwal', '-n', $node->data_dir ],
# Permissions on PGDATA should be default
-SKIP:
+ SKIP:
{
skip "unix-style permissions not supported on Windows", 1
if ($windows_os);
diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index a38f33d..21114d8 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -70,7 +70,7 @@ sub master_psql
my $cmd = shift;
system_or_bail 'psql', '-q', '--no-psqlrc', '-d',
- $node_master->connstr('postgres'), '-c', "$cmd";
+ $node_master->connstr('postgres'), '-c', "$cmd";
return;
}
@@ -79,7 +79,7 @@ sub standby_psql
my $cmd = shift;
system_or_bail 'psql', '-q', '--no-psqlrc', '-d',
- $node_standby->connstr('postgres'), '-c', "$cmd";
+ $node_standby->connstr('postgres'), '-c', "$cmd";
return;
}
@@ -97,8 +97,8 @@ sub check_query
'psql', '-q', '-A', '-t', '--no-psqlrc', '-d',
$node_master->connstr('postgres'),
'-c', $query
- ],
- '>', \$stdout, '2>', \$stderr;
+ ],
+ '>', \$stdout, '2>', \$stderr;
# We don't use ok() for the exit code and stderr, because we want this
# check to be just a single test.
@@ -265,7 +265,7 @@ sub run_pg_rewind
$node_master->group_access() ? 0640 : 0600,
"$master_pgdata/postgresql.conf")
or BAIL_OUT(
- "unable to set permissions for $master_pgdata/postgresql.conf");
+ "unable to set permissions for $master_pgdata/postgresql.conf");
# Plug-in rewound node to the now-promoted standby node
my $port_standby = $node_standby->port;
diff --git a/src/bin/pg_rewind/t/003_extrafiles.pl b/src/bin/pg_rewind/t/003_extrafiles.pl
index 496f38c..03e50a3 100644
--- a/src/bin/pg_rewind/t/003_extrafiles.pl
+++ b/src/bin/pg_rewind/t/003_extrafiles.pl
@@ -25,7 +25,7 @@ sub run_test
append_to_file "$test_master_datadir/tst_both_dir/both_file2", "in both2";
mkdir "$test_master_datadir/tst_both_dir/both_subdir/";
append_to_file "$test_master_datadir/tst_both_dir/both_subdir/both_file3",
- "in both3";
+ "in both3";
RewindTest::create_standby($test_mode);
@@ -34,9 +34,9 @@ sub run_test
mkdir "$test_standby_datadir/tst_standby_dir";
append_to_file "$test_standby_datadir/tst_standby_dir/standby_file1",
- "in standby1";
+ "in standby1";
append_to_file "$test_standby_datadir/tst_standby_dir/standby_file2",
- "in standby2";
+ "in standby2";
mkdir "$test_standby_datadir/tst_standby_dir/standby_subdir/";
append_to_file
"$test_standby_datadir/tst_standby_dir/standby_subdir/standby_file3",
@@ -44,9 +44,9 @@ sub run_test
mkdir "$test_master_datadir/tst_master_dir";
append_to_file "$test_master_datadir/tst_master_dir/master_file1",
- "in master1";
+ "in master1";
append_to_file "$test_master_datadir/tst_master_dir/master_file2",
- "in master2";
+ "in master2";
mkdir "$test_master_datadir/tst_master_dir/master_subdir/";
append_to_file
"$test_master_datadir/tst_master_dir/master_subdir/master_file3",
diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl
index f818d0b..f3a5825 100644
--- a/src/bin/pgbench/t/001_pgbench_with_server.pl
+++ b/src/bin/pgbench/t/001_pgbench_with_server.pl
@@ -128,7 +128,7 @@ pgbench(
# Run all builtin scripts, for a few transactions each
pgbench(
'--transactions=5 -Dfoo=bla --client=2 --protocol=simple --builtin=t'
- . ' --connect -n -v -n',
+ . ' --connect -n -v -n',
0,
[
qr{builtin: TPC-B},
@@ -755,7 +755,7 @@ for my $e (@errors)
$n =~ s/ /_/g;
pgbench(
'-n -t 1 -M prepared -Dfoo=bla -Dnull=null -Dtrue=true -Done=1 -Dzero=0.0 '
- . '-Dbadtrue=trueXXX -Dmaxint=9223372036854775807 -Dminint=-9223372036854775808',
+ . '-Dbadtrue=trueXXX -Dmaxint=9223372036854775807 -Dminint=-9223372036854775808',
$status,
[ $status == 1 ? qr{^$} : qr{processed: 0/1} ],
$re,
diff --git a/src/bin/psql/create_help.pl b/src/bin/psql/create_help.pl
index 08ed032..4f7c92e 100644
--- a/src/bin/psql/create_help.pl
+++ b/src/bin/psql/create_help.pl
@@ -63,28 +63,28 @@ print $hfile_handle "/*
struct _helpStruct
{
const char *cmd; /* the command name */
- const char *help; /* the help associated with it */
- void (*syntaxfunc)(PQExpBuffer); /* function that prints the syntax associated with it */
- int nl_count; /* number of newlines in syntax (for pager) */
+ const char *help; /* the help associated with it */
+ void (*syntaxfunc)(PQExpBuffer); /* function that prints the syntax associated with it */
+ int nl_count; /* number of newlines in syntax (for pager) */
};
extern const struct _helpStruct QL_HELP[];
";
print $cfile_handle "/*
- * *** Do not change this file by hand. It is automatically
- * *** generated from the DocBook documentation.
- *
- * generated by src/bin/psql/create_help.pl
- *
- */
+ * *** Do not change this file by hand. It is automatically
+ * *** generated from the DocBook documentation.
+ *
+ * generated by src/bin/psql/create_help.pl
+ *
+ */
#define N_(x) (x) /* gettext noop */
-#include \"postgres_fe.h\"
-#include \"$hfile\"
+ #include \"postgres_fe.h\"
+ #include \"$hfile\"
-";
+ ";
my $maxlen = 0;
@@ -152,7 +152,7 @@ foreach my $file (sort readdir DIR)
nl_count => $nl_count
};
$maxlen =
- ($maxlen >= length $cmdname) ? $maxlen : length $cmdname;
+ ($maxlen >= length $cmdname) ? $maxlen : length $cmdname;
}
}
else
@@ -169,19 +169,19 @@ foreach (sort keys %entries)
my $synopsis = "\"$entries{$_}{cmdsynopsis}\"";
$synopsis =~ s/\\n/\\n"\n$prefix"/g;
my @args =
- ("buf", $synopsis, map("_(\"$_\")", @{ $entries{$_}{params} }));
+ ("buf", $synopsis, map("_(\"$_\")", @{ $entries{$_}{params} }));
print $cfile_handle "static void
sql_help_$id(PQExpBuffer buf)
{
-\tappendPQExpBuffer(" . join(",\n$prefix", @args) . ");
+ \tappendPQExpBuffer(" . join(",\n$prefix", @args) . ");
}
";
}
print $cfile_handle "
-const struct _helpStruct QL_HELP[] = {
-";
+ const struct _helpStruct QL_HELP[] = {
+ ";
foreach (sort keys %entries)
{
my $id = $_;
diff --git a/src/common/unicode/generate-norm_test_table.pl b/src/common/unicode/generate-norm_test_table.pl
index e3510b5..bad6ab2 100644
--- a/src/common/unicode/generate-norm_test_table.pl
+++ b/src/common/unicode/generate-norm_test_table.pl
@@ -34,15 +34,15 @@ print $OUTPUT <<HEADER;
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/common/unicode/norm_test_table.h
- *
- *-------------------------------------------------------------------------
- */
+ *
+ *-------------------------------------------------------------------------
+ */
/*
- * File auto-generated by src/common/unicode/generate-norm_test_table.pl, do
- * not edit. There is deliberately not an #ifndef PG_NORM_TEST_TABLE_H
- * here.
- */
+ * File auto-generated by src/common/unicode/generate-norm_test_table.pl, do
+ * not edit. There is deliberately not an #ifndef PG_NORM_TEST_TABLE_H
+ * here.
+ */
typedef struct
{
@@ -52,8 +52,8 @@ typedef struct
} pg_unicode_test;
/* test table */
-HEADER
-print $OUTPUT
+ HEADER
+ print $OUTPUT
"static const pg_unicode_test UnicodeNormalizationTests[] =\n{\n";
# Helper routine to conver a space-separated list of Unicode characters to
diff --git a/src/common/unicode/generate-unicode_norm_table.pl b/src/common/unicode/generate-unicode_norm_table.pl
index f9cb406..09bd8ba 100644
--- a/src/common/unicode/generate-unicode_norm_table.pl
+++ b/src/common/unicode/generate-unicode_norm_table.pl
@@ -78,21 +78,21 @@ print $OUTPUT <<HEADER;
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/common/unicode_norm_table.h
- *
- *-------------------------------------------------------------------------
- */
+ *
+ *-------------------------------------------------------------------------
+ */
/*
- * File auto-generated by src/common/unicode/generate-unicode_norm_table.pl,
- * do not edit. There is deliberately not an #ifndef PG_UNICODE_NORM_TABLE_H
- * here.
- */
+ * File auto-generated by src/common/unicode/generate-unicode_norm_table.pl,
+ * do not edit. There is deliberately not an #ifndef PG_UNICODE_NORM_TABLE_H
+ * here.
+ */
typedef struct
{
uint32 codepoint; /* Unicode codepoint */
- uint8 comb_class; /* combining class of character */
- uint8 dec_size_flags; /* size and flags of decomposition code list */
- uint16 dec_index; /* index into UnicodeDecomp_codepoints, or the
+ uint8 comb_class; /* combining class of character */
+ uint8 dec_size_flags; /* size and flags of decomposition code list */
+ uint16 dec_index; /* index into UnicodeDecomp_codepoints, or the
* decomposition itself if DECOMP_INLINE */
} pg_unicode_decomposition;
@@ -104,128 +104,128 @@ typedef struct
#define DECOMPOSITION_IS_INLINE(x) (((x)->dec_size_flags & DECOMP_INLINE) != 0)
/* Table of Unicode codepoints and their decompositions */
-static const pg_unicode_decomposition UnicodeDecompMain[$num_characters] =
+ static const pg_unicode_decomposition UnicodeDecompMain[$num_characters] =
{
-HEADER
+ HEADER
-my $decomp_index = 0;
-my $decomp_string = "";
+ my $decomp_index = 0;
+ my $decomp_string = "";
-my $last_code = $characters[-1]->{code};
-foreach my $char (@characters)
-{
- my $code = $char->{code};
- my $class = $char->{class};
- my $decomp = $char->{decomp};
-
- # The character decomposition mapping field in UnicodeData.txt is a list
- # of unicode codepoints, separated by space. But it can be prefixed with
- # so-called compatibility formatting tag, like "<compat>", or "<font>".
- # The entries with compatibility formatting tags should not be used for
- # re-composing characters during normalization, so flag them in the table.
- # (The tag doesn't matter, only whether there is a tag or not)
- my $compat = 0;
- if ($decomp =~ /\<.*\>/)
+ my $last_code = $characters[-1]->{code};
+ foreach my $char (@characters)
{
- $compat = 1;
- $decomp =~ s/\<[^][]*\>//g;
- }
- my @decomp_elts = split(" ", $decomp);
+ my $code = $char->{code};
+ my $class = $char->{class};
+ my $decomp = $char->{decomp};
+
+ # The character decomposition mapping field in UnicodeData.txt is a list
+ # of unicode codepoints, separated by space. But it can be prefixed with
+ # so-called compatibility formatting tag, like "<compat>", or "<font>".
+ # The entries with compatibility formatting tags should not be used for
+ # re-composing characters during normalization, so flag them in the table.
+ # (The tag doesn't matter, only whether there is a tag or not)
+ my $compat = 0;
+ if ($decomp =~ /\<.*\>/)
+ {
+ $compat = 1;
+ $decomp =~ s/\<[^][]*\>//g;
+ }
+ my @decomp_elts = split(" ", $decomp);
- # Decomposition size
- # Print size of decomposition
- my $decomp_size = scalar(@decomp_elts);
+ # Decomposition size
+ # Print size of decomposition
+ my $decomp_size = scalar(@decomp_elts);
- my $first_decomp = shift @decomp_elts;
+ my $first_decomp = shift @decomp_elts;
- my $flags = "";
- my $comment = "";
+ my $flags = "";
+ my $comment = "";
- if ($decomp_size == 2)
- {
+ if ($decomp_size == 2)
+ {
+
+ # Should this be used for recomposition?
+ if ($compat)
+ {
+ $flags .= " | DECOMP_NO_COMPOSE";
+ $comment = "compatibility mapping";
+ }
+ elsif ($character_hash{$first_decomp}
+ && $character_hash{$first_decomp}->{class} != 0)
+ {
+ $flags .= " | DECOMP_NO_COMPOSE";
+ $comment = "non-starter decomposition";
+ }
+ else
+ {
+ foreach my $lcode (@composition_exclusion_codes)
+ {
+ if ($lcode eq $char->{code})
+ {
+ $flags .= " | DECOMP_NO_COMPOSE";
+ $comment = "in exclusion list";
+ last;
+ }
+ }
+ }
+ }
- # Should this be used for recomposition?
- if ($compat)
+ if ($decomp_size == 0)
{
- $flags .= " | DECOMP_NO_COMPOSE";
- $comment = "compatibility mapping";
+ print $OUTPUT "\t{0x$code, $class, 0$flags, 0}";
}
- elsif ($character_hash{$first_decomp}
- && $character_hash{$first_decomp}->{class} != 0)
+ elsif ($decomp_size == 1 && length($first_decomp) <= 4)
{
- $flags .= " | DECOMP_NO_COMPOSE";
- $comment = "non-starter decomposition";
+
+ # The decomposition consists of a single codepoint, and it fits
+ # in a uint16, so we can store it "inline" in the main table.
+ $flags .= " | DECOMP_INLINE";
+ print $OUTPUT "\t{0x$code, $class, 1$flags, 0x$first_decomp}";
}
else
{
- foreach my $lcode (@composition_exclusion_codes)
+ print $OUTPUT
+ "\t{0x$code, $class, $decomp_size$flags, $decomp_index}";
+
+ # Now save the decompositions into a dedicated area that will
+ # be written afterwards. First build the entry dedicated to
+ # a sub-table with the code and decomposition.
+ $decomp_string .= ",\n" if ($decomp_string ne "");
+
+ $decomp_string .= "\t /* $decomp_index */ 0x$first_decomp";
+ foreach (@decomp_elts)
{
- if ($lcode eq $char->{code})
- {
- $flags .= " | DECOMP_NO_COMPOSE";
- $comment = "in exclusion list";
- last;
- }
+ $decomp_string .= ", 0x$_";
}
- }
- }
- if ($decomp_size == 0)
- {
- print $OUTPUT "\t{0x$code, $class, 0$flags, 0}";
- }
- elsif ($decomp_size == 1 && length($first_decomp) <= 4)
- {
+ $decomp_index = $decomp_index + $decomp_size;
+ }
- # The decomposition consists of a single codepoint, and it fits
- # in a uint16, so we can store it "inline" in the main table.
- $flags .= " | DECOMP_INLINE";
- print $OUTPUT "\t{0x$code, $class, 1$flags, 0x$first_decomp}";
- }
- else
- {
- print $OUTPUT
- "\t{0x$code, $class, $decomp_size$flags, $decomp_index}";
+ # Print a comma after all items except the last one.
+ print $OUTPUT "," unless ($code eq $last_code);
+ if ($comment ne "")
+ {
- # Now save the decompositions into a dedicated area that will
- # be written afterwards. First build the entry dedicated to
- # a sub-table with the code and decomposition.
- $decomp_string .= ",\n" if ($decomp_string ne "");
+ # If the line is wide already, indent the comment with one tab,
+ # otherwise with two. This is to make the output match the way
+ # pgindent would mangle it. (This is quite hacky. To do this
+ # properly, we should actually track how long the line is so far,
+ # but this works for now.)
+ print $OUTPUT "\t" if ($decomp_index < 10);
- $decomp_string .= "\t /* $decomp_index */ 0x$first_decomp";
- foreach (@decomp_elts)
- {
- $decomp_string .= ", 0x$_";
+ print $OUTPUT "\t/* $comment */" if ($comment ne "");
}
-
- $decomp_index = $decomp_index + $decomp_size;
+ print $OUTPUT "\n";
}
+ print $OUTPUT "\n};\n\n";
- # Print a comma after all items except the last one.
- print $OUTPUT "," unless ($code eq $last_code);
- if ($comment ne "")
+ # Print the array of decomposed codes.
+ print $OUTPUT <<HEADER;
+ /* codepoints array */
+ static const uint32 UnicodeDecomp_codepoints[$decomp_index] =
{
+ $decomp_string
+ };
+ HEADER
- # If the line is wide already, indent the comment with one tab,
- # otherwise with two. This is to make the output match the way
- # pgindent would mangle it. (This is quite hacky. To do this
- # properly, we should actually track how long the line is so far,
- # but this works for now.)
- print $OUTPUT "\t" if ($decomp_index < 10);
-
- print $OUTPUT "\t/* $comment */" if ($comment ne "");
- }
- print $OUTPUT "\n";
-}
-print $OUTPUT "\n};\n\n";
-
-# Print the array of decomposed codes.
-print $OUTPUT <<HEADER;
-/* codepoints array */
-static const uint32 UnicodeDecomp_codepoints[$decomp_index] =
-{
-$decomp_string
-};
-HEADER
-
-close $OUTPUT;
+ close $OUTPUT;
diff --git a/src/include/catalog/reformat_dat_file.pl b/src/include/catalog/reformat_dat_file.pl
index 41c57c5..7132e42 100755
--- a/src/include/catalog/reformat_dat_file.pl
+++ b/src/include/catalog/reformat_dat_file.pl
@@ -32,7 +32,7 @@ use Catalog;
# Note: line_number is also a metadata field, but we never write it out,
# so it's not listed here.
my @METADATA =
- ('oid', 'oid_symbol', 'array_type_oid', 'descr', 'autogenerated');
+('oid', 'oid_symbol', 'array_type_oid', 'descr', 'autogenerated');
my @input_files;
my $output_path = '';
@@ -315,13 +315,13 @@ sub format_hash
sub usage
{
die <<EOM;
-Usage: reformat_dat_file.pl [options] datafile...
+ Usage: reformat_dat_file.pl [options] datafile...
-Options:
+ Options:
-o PATH write output files to PATH instead of current directory
--full-tuples write out full tuples, including default values
-Expects a list of .dat files as arguments.
+ Expects a list of .dat files as arguments.
-EOM
+ EOM
}
diff --git a/src/interfaces/ecpg/preproc/check_rules.pl b/src/interfaces/ecpg/preproc/check_rules.pl
index 8b06bd8..9e61994 100644
--- a/src/interfaces/ecpg/preproc/check_rules.pl
+++ b/src/interfaces/ecpg/preproc/check_rules.pl
@@ -37,13 +37,13 @@ if ($verbose)
my %replace_line = (
'ExecuteStmtEXECUTEnameexecute_param_clause' =>
- 'EXECUTE prepared_name execute_param_clause execute_rest',
+ 'EXECUTE prepared_name execute_param_clause execute_rest',
'ExecuteStmtCREATEOptTempTABLEcreate_as_targetASEXECUTEnameexecute_param_clause'
- => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
+ => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
- 'PREPARE prepared_name prep_type_clause AS PreparableStmt');
+ 'PREPARE prepared_name prep_type_clause AS PreparableStmt');
my $block = '';
my $yaccmode = 0;
diff --git a/src/interfaces/ecpg/preproc/parse.pl b/src/interfaces/ecpg/preproc/parse.pl
index e1c0a2c..5453a61 100644
--- a/src/interfaces/ecpg/preproc/parse.pl
+++ b/src/interfaces/ecpg/preproc/parse.pl
@@ -94,17 +94,17 @@ my %replace_line = (
'VariableShowStmtSHOWvar_name' => 'SHOW var_name ecpg_into',
'VariableShowStmtSHOWTIMEZONE' => 'SHOW TIME ZONE ecpg_into',
'VariableShowStmtSHOWTRANSACTIONISOLATIONLEVEL' =>
- 'SHOW TRANSACTION ISOLATION LEVEL ecpg_into',
+ 'SHOW TRANSACTION ISOLATION LEVEL ecpg_into',
'VariableShowStmtSHOWSESSIONAUTHORIZATION' =>
- 'SHOW SESSION AUTHORIZATION ecpg_into',
+ 'SHOW SESSION AUTHORIZATION ecpg_into',
'returning_clauseRETURNINGtarget_list' =>
- 'RETURNING target_list opt_ecpg_into',
+ 'RETURNING target_list opt_ecpg_into',
'ExecuteStmtEXECUTEnameexecute_param_clause' =>
- 'EXECUTE prepared_name execute_param_clause execute_rest',
+ 'EXECUTE prepared_name execute_param_clause execute_rest',
'ExecuteStmtCREATEOptTempTABLEcreate_as_targetASEXECUTEnameexecute_param_clause'
- => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
+ => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
- 'PREPARE prepared_name prep_type_clause AS PreparableStmt',
+ 'PREPARE prepared_name prep_type_clause AS PreparableStmt',
'var_nameColId' => 'ECPGColId',);
preload_addons();
@@ -125,296 +125,296 @@ dump_buffer('trailer');
sub main
{
line: while (<>)
- {
- if (/ERRCODE_FEATURE_NOT_SUPPORTED/)
- {
- $feature_not_supported = 1;
- next line;
- }
-
- chomp;
-
- # comment out the line below to make the result file match (blank line wise)
- # the prior version.
- #next if ($_ eq '');
-
- # Dump the action for a rule -
- # stmt_mode indicates if we are processing the 'stmt:'
- # rule (mode==0 means normal, mode==1 means stmt:)
- # flds are the fields to use. These may start with a '$' - in
- # which case they are the result of a previous non-terminal
- #
- # if they don't start with a '$' then they are token name
- #
- # len is the number of fields in flds...
- # leadin is the padding to apply at the beginning (just use for formatting)
-
- if (/^%%/)
- {
- $tokenmode = 2;
- $copymode = 1;
- $yaccmode++;
- $infield = 0;
- }
-
- my $prec = 0;
-
- # Make sure any braces are split
- s/{/ { /g;
- s/}/ } /g;
-
- # Any comments are split
- s|\/\*| /* |g;
- s|\*\/| */ |g;
-
- # Now split the line into individual fields
- my @arr = split(' ');
-
- if ($arr[0] eq '%token' && $tokenmode == 0)
- {
- $tokenmode = 1;
- include_file('tokens', 'ecpg.tokens');
- }
- elsif ($arr[0] eq '%type' && $header_included == 0)
- {
- include_file('header', 'ecpg.header');
- include_file('ecpgtype', 'ecpg.type');
- $header_included = 1;
- }
-
- if ($tokenmode == 1)
- {
- my $str = '';
- my $prior = '';
- for my $a (@arr)
- {
- if ($a eq '/*')
- {
- $comment++;
- next;
- }
- if ($a eq '*/')
- {
- $comment--;
- next;
- }
- if ($comment)
- {
- next;
- }
- if (substr($a, 0, 1) eq '<')
- {
- next;
-
- # its a type
- }
- $tokens{$a} = 1;
-
- $str = $str . ' ' . $a;
- if ($a eq 'IDENT' && $prior eq '%nonassoc')
- {
-
- # add two more tokens to the list
- $str = $str . "\n%nonassoc CSTRING\n%nonassoc UIDENT";
- }
- $prior = $a;
- }
- add_to_buffer('orig_tokens', $str);
- next line;
- }
-
- # Don't worry about anything if we're not in the right section of gram.y
- if ($yaccmode != 1)
- {
- next line;
- }
-
-
- # Go through each field in turn
- for (
- my $fieldIndexer = 0;
- $fieldIndexer < scalar(@arr);
- $fieldIndexer++)
- {
- if ($arr[$fieldIndexer] eq '*/' && $comment)
- {
- $comment = 0;
- next;
- }
- elsif ($comment)
- {
- next;
- }
- elsif ($arr[$fieldIndexer] eq '/*')
- {
-
- # start of a multiline comment
- $comment = 1;
- next;
- }
- elsif ($arr[$fieldIndexer] eq '//')
- {
- next line;
- }
- elsif ($arr[$fieldIndexer] eq '}')
- {
- $brace_indent--;
- next;
- }
- elsif ($arr[$fieldIndexer] eq '{')
- {
- $brace_indent++;
- next;
- }
-
- if ($brace_indent > 0)
- {
- next;
- }
- if ($arr[$fieldIndexer] eq ';')
- {
- if ($copymode)
- {
- if ($infield)
- {
- dump_line($stmt_mode, \@fields);
- }
- add_to_buffer('rules', ";\n\n");
- }
- else
- {
- $copymode = 1;
- }
- @fields = ();
- $infield = 0;
- $line = '';
- next;
- }
-
- if ($arr[$fieldIndexer] eq '|')
- {
- if ($copymode)
- {
- if ($infield)
- {
- $infield = $infield + dump_line($stmt_mode, \@fields);
- }
- if ($infield > 1)
- {
- $line = '| ';
- }
- }
- @fields = ();
- next;
- }
-
- if (exists $replace_token{ $arr[$fieldIndexer] })
- {
- $arr[$fieldIndexer] = $replace_token{ $arr[$fieldIndexer] };
- }
-
- # Are we looking at a declaration of a non-terminal ?
- if (($arr[$fieldIndexer] =~ /[A-Za-z0-9]+:/)
- || $arr[ $fieldIndexer + 1 ] eq ':')
- {
- $non_term_id = $arr[$fieldIndexer];
- $non_term_id =~ tr/://d;
-
- if (not defined $replace_types{$non_term_id})
- {
- $replace_types{$non_term_id} = '<str>';
- $copymode = 1;
- }
- elsif ($replace_types{$non_term_id} eq 'ignore')
- {
- $copymode = 0;
- $line = '';
- next line;
- }
- $line = $line . ' ' . $arr[$fieldIndexer];
-
- # Do we have the : attached already ?
- # If yes, we'll have already printed the ':'
- if (!($arr[$fieldIndexer] =~ '[A-Za-z0-9]+:'))
- {
-
- # Consume the ':' which is next...
- $line = $line . ':';
- $fieldIndexer++;
- }
-
- # Special mode?
- if ($non_term_id eq 'stmt')
- {
- $stmt_mode = 1;
- }
- else
- {
- $stmt_mode = 0;
- }
- my $tstr =
- '%type '
- . $replace_types{$non_term_id} . ' '
- . $non_term_id;
- add_to_buffer('types', $tstr);
-
- if ($copymode)
- {
- add_to_buffer('rules', $line);
- }
- $line = '';
- @fields = ();
- $infield = 1;
- next;
- }
- elsif ($copymode)
- {
- $line = $line . ' ' . $arr[$fieldIndexer];
- }
- if ($arr[$fieldIndexer] eq '%prec')
- {
- $prec = 1;
- next;
- }
-
- if ( $copymode
- && !$prec
- && !$comment
- && length($arr[$fieldIndexer])
- && $infield)
- {
- if ($arr[$fieldIndexer] ne 'Op'
- && ( $tokens{ $arr[$fieldIndexer] } > 0
- || $arr[$fieldIndexer] =~ /'.+'/)
- || $stmt_mode == 1)
- {
- my $S;
- if (exists $replace_string{ $arr[$fieldIndexer] })
- {
- $S = $replace_string{ $arr[$fieldIndexer] };
- }
- else
- {
- $S = $arr[$fieldIndexer];
- }
- $S =~ s/_P//g;
- $S =~ tr/'//d;
- if ($stmt_mode == 1)
- {
- push(@fields, $S);
- }
- else
- {
- push(@fields, lc($S));
- }
- }
- else
- {
- push(@fields, '$' . (scalar(@fields) + 1));
- }
- }
- }
- }
+ {
+ if (/ERRCODE_FEATURE_NOT_SUPPORTED/)
+ {
+ $feature_not_supported = 1;
+ next line;
+ }
+
+ chomp;
+
+ # comment out the line below to make the result file match (blank line wise)
+ # the prior version.
+ #next if ($_ eq '');
+
+ # Dump the action for a rule -
+ # stmt_mode indicates if we are processing the 'stmt:'
+ # rule (mode==0 means normal, mode==1 means stmt:)
+ # flds are the fields to use. These may start with a '$' - in
+ # which case they are the result of a previous non-terminal
+ #
+ # if they don't start with a '$' then they are token name
+ #
+ # len is the number of fields in flds...
+ # leadin is the padding to apply at the beginning (just use for formatting)
+
+ if (/^%%/)
+ {
+ $tokenmode = 2;
+ $copymode = 1;
+ $yaccmode++;
+ $infield = 0;
+ }
+
+ my $prec = 0;
+
+ # Make sure any braces are split
+ s/{/ { /g;
+ s/}/ } /g;
+
+ # Any comments are split
+ s|\/\*| /* |g;
+ s|\*\/| */ |g;
+
+ # Now split the line into individual fields
+ my @arr = split(' ');
+
+ if ($arr[0] eq '%token' && $tokenmode == 0)
+ {
+ $tokenmode = 1;
+ include_file('tokens', 'ecpg.tokens');
+ }
+ elsif ($arr[0] eq '%type' && $header_included == 0)
+ {
+ include_file('header', 'ecpg.header');
+ include_file('ecpgtype', 'ecpg.type');
+ $header_included = 1;
+ }
+
+ if ($tokenmode == 1)
+ {
+ my $str = '';
+ my $prior = '';
+ for my $a (@arr)
+ {
+ if ($a eq '/*')
+ {
+ $comment++;
+ next;
+ }
+ if ($a eq '*/')
+ {
+ $comment--;
+ next;
+ }
+ if ($comment)
+ {
+ next;
+ }
+ if (substr($a, 0, 1) eq '<')
+ {
+ next;
+
+ # its a type
+ }
+ $tokens{$a} = 1;
+
+ $str = $str . ' ' . $a;
+ if ($a eq 'IDENT' && $prior eq '%nonassoc')
+ {
+
+ # add two more tokens to the list
+ $str = $str . "\n%nonassoc CSTRING\n%nonassoc UIDENT";
+ }
+ $prior = $a;
+ }
+ add_to_buffer('orig_tokens', $str);
+ next line;
+ }
+
+ # Don't worry about anything if we're not in the right section of gram.y
+ if ($yaccmode != 1)
+ {
+ next line;
+ }
+
+
+ # Go through each field in turn
+ for (
+ my $fieldIndexer = 0;
+ $fieldIndexer < scalar(@arr);
+ $fieldIndexer++)
+ {
+ if ($arr[$fieldIndexer] eq '*/' && $comment)
+ {
+ $comment = 0;
+ next;
+ }
+ elsif ($comment)
+ {
+ next;
+ }
+ elsif ($arr[$fieldIndexer] eq '/*')
+ {
+
+ # start of a multiline comment
+ $comment = 1;
+ next;
+ }
+ elsif ($arr[$fieldIndexer] eq '//')
+ {
+ next line;
+ }
+ elsif ($arr[$fieldIndexer] eq '}')
+ {
+ $brace_indent--;
+ next;
+ }
+ elsif ($arr[$fieldIndexer] eq '{')
+ {
+ $brace_indent++;
+ next;
+ }
+
+ if ($brace_indent > 0)
+ {
+ next;
+ }
+ if ($arr[$fieldIndexer] eq ';')
+ {
+ if ($copymode)
+ {
+ if ($infield)
+ {
+ dump_line($stmt_mode, \@fields);
+ }
+ add_to_buffer('rules', ";\n\n");
+ }
+ else
+ {
+ $copymode = 1;
+ }
+ @fields = ();
+ $infield = 0;
+ $line = '';
+ next;
+ }
+
+ if ($arr[$fieldIndexer] eq '|')
+ {
+ if ($copymode)
+ {
+ if ($infield)
+ {
+ $infield = $infield + dump_line($stmt_mode, \@fields);
+ }
+ if ($infield > 1)
+ {
+ $line = '| ';
+ }
+ }
+ @fields = ();
+ next;
+ }
+
+ if (exists $replace_token{ $arr[$fieldIndexer] })
+ {
+ $arr[$fieldIndexer] = $replace_token{ $arr[$fieldIndexer] };
+ }
+
+ # Are we looking at a declaration of a non-terminal ?
+ if (($arr[$fieldIndexer] =~ /[A-Za-z0-9]+:/)
+ || $arr[ $fieldIndexer + 1 ] eq ':')
+ {
+ $non_term_id = $arr[$fieldIndexer];
+ $non_term_id =~ tr/://d;
+
+ if (not defined $replace_types{$non_term_id})
+ {
+ $replace_types{$non_term_id} = '<str>';
+ $copymode = 1;
+ }
+ elsif ($replace_types{$non_term_id} eq 'ignore')
+ {
+ $copymode = 0;
+ $line = '';
+ next line;
+ }
+ $line = $line . ' ' . $arr[$fieldIndexer];
+
+ # Do we have the : attached already ?
+ # If yes, we'll have already printed the ':'
+ if (!($arr[$fieldIndexer] =~ '[A-Za-z0-9]+:'))
+ {
+
+ # Consume the ':' which is next...
+ $line = $line . ':';
+ $fieldIndexer++;
+ }
+
+ # Special mode?
+ if ($non_term_id eq 'stmt')
+ {
+ $stmt_mode = 1;
+ }
+ else
+ {
+ $stmt_mode = 0;
+ }
+ my $tstr =
+ '%type '
+ . $replace_types{$non_term_id} . ' '
+ . $non_term_id;
+ add_to_buffer('types', $tstr);
+
+ if ($copymode)
+ {
+ add_to_buffer('rules', $line);
+ }
+ $line = '';
+ @fields = ();
+ $infield = 1;
+ next;
+ }
+ elsif ($copymode)
+ {
+ $line = $line . ' ' . $arr[$fieldIndexer];
+ }
+ if ($arr[$fieldIndexer] eq '%prec')
+ {
+ $prec = 1;
+ next;
+ }
+
+ if ( $copymode
+ && !$prec
+ && !$comment
+ && length($arr[$fieldIndexer])
+ && $infield)
+ {
+ if ($arr[$fieldIndexer] ne 'Op'
+ && ( $tokens{ $arr[$fieldIndexer] } > 0
+ || $arr[$fieldIndexer] =~ /'.+'/)
+ || $stmt_mode == 1)
+ {
+ my $S;
+ if (exists $replace_string{ $arr[$fieldIndexer] })
+ {
+ $S = $replace_string{ $arr[$fieldIndexer] };
+ }
+ else
+ {
+ $S = $arr[$fieldIndexer];
+ }
+ $S =~ s/_P//g;
+ $S =~ tr/'//d;
+ if ($stmt_mode == 1)
+ {
+ push(@fields, $S);
+ }
+ else
+ {
+ push(@fields, lc($S));
+ }
+ }
+ else
+ {
+ push(@fields, '$' . (scalar(@fields) + 1));
+ }
+ }
+ }
+ }
return;
}
@@ -624,11 +624,11 @@ sub dump_line
}
=top
- load addons into cache
- %addons = {
- stmtClosePortalStmt => { 'type' => 'block', 'lines' => [ "{", "if (INFORMIX_MODE)" ..., "}" ] },
- stmtViewStmt => { 'type' => 'rule', 'lines' => [ "| ECPGAllocateDescr", ... ] }
- }
+ load addons into cache
+ %addons = {
+ stmtClosePortalStmt => { 'type' => 'block', 'lines' => [ "{", "if (INFORMIX_MODE)" ..., "}" ] },
+ stmtViewStmt => { 'type' => 'rule', 'lines' => [ "| ECPGAllocateDescr", ... ] }
+}
=cut
diff --git a/src/interfaces/libpq/test/regress.pl b/src/interfaces/libpq/test/regress.pl
index 3ad638a..7dd05cf 100644
--- a/src/interfaces/libpq/test/regress.pl
+++ b/src/interfaces/libpq/test/regress.pl
@@ -54,9 +54,9 @@ if ($diff_status == 0)
else
{
print <<EOF;
-FAILED: the test result differs from the expected output
+ FAILED: the test result differs from the expected output
-Review the difference in "$subdir/regress.diff"
-EOF
+ Review the difference in "$subdir/regress.diff"
+ EOF
exit 1;
}
diff --git a/src/pl/plperl/plc_perlboot.pl b/src/pl/plperl/plc_perlboot.pl
index f41aa80..0ef2d7b 100644
--- a/src/pl/plperl/plc_perlboot.pl
+++ b/src/pl/plperl/plc_perlboot.pl
@@ -45,15 +45,15 @@ sub ::encode_array_constructor
my $arg = shift;
return ::quote_nullable($arg) unless ::is_array_ref($arg);
my $res = join ", ",
- map { (ref $_) ? ::encode_array_constructor($_) : ::quote_nullable($_) }
- @$arg;
+ map { (ref $_) ? ::encode_array_constructor($_) : ::quote_nullable($_) }
+ @$arg;
return "ARRAY[$res]";
}
{
-#<<< protect next line from perltidy so perlcritic annotation works
+ #<<< protect next line from perltidy so perlcritic annotation works
package PostgreSQL::InServer; ## no critic (RequireFilenameMatchesPackage)
-#>>>
+ #>>>
use strict;
use warnings;
@@ -64,40 +64,40 @@ sub ::encode_array_constructor
&::elog(&::WARNING, $msg);
return;
}
- $SIG{__WARN__} = \&plperl_warn;
+$SIG{__WARN__} = \&plperl_warn;
- sub plperl_die
- {
- (my $msg = shift) =~ s/\(eval \d+\) //g;
- die $msg;
- }
- $SIG{__DIE__} = \&plperl_die;
+sub plperl_die
+{
+ (my $msg = shift) =~ s/\(eval \d+\) //g;
+ die $msg;
+}
+$SIG{__DIE__} = \&plperl_die;
- sub mkfuncsrc
- {
- my ($name, $imports, $prolog, $src) = @_;
+sub mkfuncsrc
+{
+ my ($name, $imports, $prolog, $src) = @_;
- my $BEGIN = join "\n", map {
- my $names = $imports->{$_} || [];
- "$_->import(qw(@$names));"
- } sort keys %$imports;
- $BEGIN &&= "BEGIN { $BEGIN }";
+ my $BEGIN = join "\n", map {
+ my $names = $imports->{$_} || [];
+ "$_->import(qw(@$names));"
+ } sort keys %$imports;
+ $BEGIN &&= "BEGIN { $BEGIN }";
- return qq[ package main; sub { $BEGIN $prolog $src } ];
- }
+ return qq[ package main; sub { $BEGIN $prolog $src } ];
+}
- sub mkfunc
- {
- ## no critic (ProhibitNoStrict, ProhibitStringyEval);
- no strict; # default to no strict for the eval
- no warnings; # default to no warnings for the eval
- my $ret = eval(mkfuncsrc(@_));
- $@ =~ s/\(eval \d+\) //g if $@;
- return $ret;
- ## use critic
- }
+sub mkfunc
+{
+ ## no critic (ProhibitNoStrict, ProhibitStringyEval);
+ no strict; # default to no strict for the eval
+ no warnings; # default to no warnings for the eval
+ my $ret = eval(mkfuncsrc(@_));
+ $@ =~ s/\(eval \d+\) //g if $@;
+ return $ret;
+ ## use critic
+}
- 1;
+1;
}
{
@@ -116,10 +116,10 @@ sub ::encode_array_constructor
return ::encode_typed_literal($self->{'array'}, $self->{'typeoid'});
}
- sub to_arr
- {
- return shift->{'array'};
- }
+sub to_arr
+{
+ return shift->{'array'};
+}
- 1;
+1;
}
diff --git a/src/pl/plperl/plperl_opmask.pl b/src/pl/plperl/plperl_opmask.pl
index e4e64b8..b1378e8 100644
--- a/src/pl/plperl/plperl_opmask.pl
+++ b/src/pl/plperl/plperl_opmask.pl
@@ -50,7 +50,7 @@ printf $fh " /* ALLOWED: @allowed_ops */ \\\n";
foreach my $opname (opset_to_ops(opset(@allowed_ops)))
{
printf $fh qq{ opmask[OP_%-12s] = 0;\t/* %s */ \\\n},
- uc($opname), opdesc($opname);
+ uc($opname), opdesc($opname);
}
printf $fh " /* end */ \n";
diff --git a/src/pl/plperl/text2macro.pl b/src/pl/plperl/text2macro.pl
index 52fcbe1..0cf3d19 100644
--- a/src/pl/plperl/text2macro.pl
+++ b/src/pl/plperl/text2macro.pl
@@ -2,47 +2,47 @@
=head1 NAME
-text2macro.pl - convert text files into C string-literal macro definitions
+ text2macro.pl - convert text files into C string-literal macro definitions
-=head1 SYNOPSIS
+ =head1 SYNOPSIS
text2macro [options] file ... > output.h
-Options:
+ Options:
--prefix=S - add prefix S to the names of the macros
--name=S - use S as the macro name (assumes only one file)
--strip=S - don't include lines that match perl regex S
-=head1 DESCRIPTION
+ =head1 DESCRIPTION
-Reads one or more text files and outputs a corresponding series of C
-pre-processor macro definitions. Each macro defines a string literal that
-contains the contents of the corresponding text file. The basename of the text
-file as capitalized and used as the name of the macro, along with an optional prefix.
+ Reads one or more text files and outputs a corresponding series of C
+ pre-processor macro definitions. Each macro defines a string literal that
+ contains the contents of the corresponding text file. The basename of the text
+ file as capitalized and used as the name of the macro, along with an optional prefix.
-=cut
+ =cut
-use strict;
+ use strict;
use warnings;
use Getopt::Long;
GetOptions(
- 'prefix=s' => \my $opt_prefix,
- 'name=s' => \my $opt_name,
- 'strip=s' => \my $opt_strip,
- 'selftest!' => sub { exit selftest() },) or exit 1;
+ 'prefix=s' => \my $opt_prefix,
+ 'name=s' => \my $opt_name,
+ 'strip=s' => \my $opt_strip,
+ 'selftest!' => sub { exit selftest() },) or exit 1;
die "No text files specified"
unless @ARGV;
print qq{
-/*
- * DO NOT EDIT - THIS FILE IS AUTOGENERATED - CHANGES WILL BE LOST
- * Generated by src/pl/plperl/text2macro.pl
- */
-};
+ /*
+ * DO NOT EDIT - THIS FILE IS AUTOGENERATED - CHANGES WILL BE LOST
+ * Generated by src/pl/plperl/text2macro.pl
+ */
+ };
for my $src_file (@ARGV)
{
@@ -61,11 +61,11 @@ for my $src_file (@ARGV)
next if $opt_strip and m/$opt_strip/o;
- # escape the text to suite C string literal rules
- s/\\/\\\\/g;
+ # escape the text to suite C string literal rules
+ s/\\/\\\\/g;
s/"/\\"/g;
- printf qq{"%s\\n" \\\n}, $_;
+ printf qq{"%s\\n" \\\n}, $_;
}
print qq{""\n\n};
}
diff --git a/src/pl/plpgsql/src/generate-plerrcodes.pl b/src/pl/plpgsql/src/generate-plerrcodes.pl
index 834cd50..75bfa1c 100644
--- a/src/pl/plpgsql/src/generate-plerrcodes.pl
+++ b/src/pl/plpgsql/src/generate-plerrcodes.pl
@@ -26,7 +26,7 @@ while (<$errcodes>)
die unless /^([^\s]{5})\s+([EWS])\s+([^\s]+)(?:\s+)?([^\s]+)?/;
(my $sqlstate, my $type, my $errcode_macro, my $condition_name) =
- ($1, $2, $3, $4);
+ ($1, $2, $3, $4);
# Skip non-errors
next unless $type eq 'E';
diff --git a/src/pl/plpython/generate-spiexceptions.pl b/src/pl/plpython/generate-spiexceptions.pl
index 73ca50e..a83382c 100644
--- a/src/pl/plpython/generate-spiexceptions.pl
+++ b/src/pl/plpython/generate-spiexceptions.pl
@@ -26,7 +26,7 @@ while (<$errcodes>)
die unless /^([^\s]{5})\s+([EWS])\s+([^\s]+)(?:\s+)?([^\s]+)?/;
(my $sqlstate, my $type, my $errcode_macro, my $condition_name) =
- ($1, $2, $3, $4);
+ ($1, $2, $3, $4);
# Skip non-errors
next unless $type eq 'E';
diff --git a/src/pl/tcl/generate-pltclerrcodes.pl b/src/pl/tcl/generate-pltclerrcodes.pl
index b5a5955..69ecd00 100644
--- a/src/pl/tcl/generate-pltclerrcodes.pl
+++ b/src/pl/tcl/generate-pltclerrcodes.pl
@@ -26,7 +26,7 @@ while (<$errcodes>)
die unless /^([^\s]{5})\s+([EWS])\s+([^\s]+)(?:\s+)?([^\s]+)?/;
(my $sqlstate, my $type, my $errcode_macro, my $condition_name) =
- ($1, $2, $3, $4);
+ ($1, $2, $3, $4);
# Skip non-errors
next unless $type eq 'E';
diff --git a/src/test/kerberos/t/001_auth.pl b/src/test/kerberos/t/001_auth.pl
index ca7c974..6e0a681 100644
--- a/src/test/kerberos/t/001_auth.pl
+++ b/src/test/kerberos/t/001_auth.pl
@@ -78,17 +78,17 @@ default = FILE:$krb5_log
kdc = FILE:$kdc_log
[libdefaults]
-default_realm = $realm
+ default_realm = $realm
[realms]
-$realm = {
- kdc = $hostaddr:$kdc_port
+ $realm = {
+ kdc = $hostaddr:$kdc_port
}!);
append_to_file(
$kdc_conf,
qq![kdcdefaults]
-!);
+ !);
# For new-enough versions of krb5, use the _listen settings rather
# than the _ports settings so that we can bind to localhost only.
@@ -112,11 +112,11 @@ append_to_file(
$kdc_conf,
qq!
[realms]
-$realm = {
- database_name = $kdc_datadir/principal
- admin_keytab = FILE:$kdc_datadir/kadm5.keytab
- acl_file = $kdc_datadir/kadm5.acl
- key_stash_file = $kdc_datadir/_k5.$realm
+ $realm = {
+ database_name = $kdc_datadir/principal
+ admin_keytab = FILE:$kdc_datadir/kadm5.keytab
+ acl_file = $kdc_datadir/kadm5.acl
+ key_stash_file = $kdc_datadir/_k5.$realm
}!);
mkdir $kdc_datadir or die;
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index fb4ecf8..989886b 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -93,10 +93,10 @@ my %pgdump_runs = (
'pg_dump', '--no-sync', '-Fc', '-Z6',
"--file=$tempdir/defaults_custom_format.dump", 'postgres',
],
- restore_cmd => [
- 'pg_restore',
- "--file=$tempdir/defaults_custom_format.sql",
- "$tempdir/defaults_custom_format.dump",
+ restore_cmd => [
+ 'pg_restore',
+ "--file=$tempdir/defaults_custom_format.sql",
+ "$tempdir/defaults_custom_format.dump",
],
},
defaults_dir_format => {
@@ -105,10 +105,10 @@ my %pgdump_runs = (
'pg_dump', '--no-sync', '-Fd',
"--file=$tempdir/defaults_dir_format", 'postgres',
],
- restore_cmd => [
- 'pg_restore',
- "--file=$tempdir/defaults_dir_format.sql",
- "$tempdir/defaults_dir_format",
+ restore_cmd => [
+ 'pg_restore',
+ "--file=$tempdir/defaults_dir_format.sql",
+ "$tempdir/defaults_dir_format",
],
},
defaults_parallel => {
@@ -117,10 +117,10 @@ my %pgdump_runs = (
'pg_dump', '--no-sync', '-Fd', '-j2',
"--file=$tempdir/defaults_parallel", 'postgres',
],
- restore_cmd => [
- 'pg_restore',
- "--file=$tempdir/defaults_parallel.sql",
- "$tempdir/defaults_parallel",
+ restore_cmd => [
+ 'pg_restore',
+ "--file=$tempdir/defaults_parallel.sql",
+ "$tempdir/defaults_parallel",
],
},
defaults_tar_format => {
@@ -129,10 +129,10 @@ my %pgdump_runs = (
'pg_dump', '--no-sync', '-Ft',
"--file=$tempdir/defaults_tar_format.tar", 'postgres',
],
- restore_cmd => [
- 'pg_restore',
- "--file=$tempdir/defaults_tar_format.sql",
- "$tempdir/defaults_tar_format.tar",
+ restore_cmd => [
+ 'pg_restore',
+ "--file=$tempdir/defaults_tar_format.sql",
+ "$tempdir/defaults_tar_format.tar",
],
},
pg_dumpall_globals => {
@@ -227,12 +227,12 @@ my %tests = (
create_order => 9,
create_sql =>
'ALTER EXTENSION test_pg_dump ADD TABLE regress_pg_dump_table_added;',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE public.regress_pg_dump_table_added (\E
\n\s+\Qcol1 integer NOT NULL,\E
\n\s+\Qcol2 integer\E
\n\);\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE EXTENSION test_pg_dump' => {
@@ -241,11 +241,11 @@ my %tests = (
regexp => qr/^
\QCREATE EXTENSION IF NOT EXISTS test_pg_dump WITH SCHEMA public;\E
\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
unlike => { binary_upgrade => 1, },
},
@@ -266,19 +266,19 @@ my %tests = (
\n\s+\QNO MAXVALUE\E
\n\s+\QCACHE 1;\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE TABLE regress_pg_dump_table_added' => {
create_order => 7,
create_sql =>
'CREATE TABLE regress_pg_dump_table_added (col1 int not null, col2 int);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE public.regress_pg_dump_table_added (\E
\n\s+\Qcol1 integer NOT NULL,\E
\n\s+\Qcol2 integer\E
\n\);\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE SEQUENCE regress_pg_dump_seq' => {
@@ -290,7 +290,7 @@ my %tests = (
\n\s+\QNO MAXVALUE\E
\n\s+\QCACHE 1;\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'SETVAL SEQUENCE regress_seq_dumpable' => {
@@ -299,11 +299,11 @@ my %tests = (
regexp => qr/^
\QSELECT pg_catalog.setval('public.regress_seq_dumpable', 1, true);\E
\n/xm,
- like => {
- %full_runs,
- data_only => 1,
- section_data => 1,
- },
+ like => {
+ %full_runs,
+ data_only => 1,
+ section_data => 1,
+ },
},
'CREATE TABLE regress_pg_dump_table' => {
@@ -312,14 +312,14 @@ my %tests = (
\n\s+\Qcol1 integer NOT NULL,\E
\n\s+\Qcol2 integer\E
\n\);\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE ACCESS METHOD regress_test_am' => {
regexp => qr/^
\QCREATE ACCESS METHOD regress_test_am TYPE INDEX HANDLER bthandler;\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'COMMENT ON EXTENSION test_pg_dump' => {
@@ -327,35 +327,35 @@ my %tests = (
\QCOMMENT ON EXTENSION test_pg_dump \E
\QIS 'Test pg_dump with an extension';\E
\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
},
'GRANT SELECT regress_pg_dump_table_added pre-ALTER EXTENSION' => {
create_order => 8,
create_sql =>
'GRANT SELECT ON regress_pg_dump_table_added TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT SELECT ON TABLE public.regress_pg_dump_table_added TO regress_dump_test_role;\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'REVOKE SELECT regress_pg_dump_table_added post-ALTER EXTENSION' => {
create_order => 10,
create_sql =>
'REVOKE SELECT ON regress_pg_dump_table_added FROM regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QREVOKE SELECT ON TABLE public.regress_pg_dump_table_added FROM regress_dump_test_role;\E
\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
unlike => { no_privs => 1, },
},
@@ -365,7 +365,7 @@ my %tests = (
\QGRANT SELECT ON TABLE public.regress_pg_dump_table TO regress_dump_test_role;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT SELECT(col1) ON regress_pg_dump_table' => {
@@ -374,60 +374,60 @@ my %tests = (
\QGRANT SELECT(col1) ON TABLE public.regress_pg_dump_table TO PUBLIC;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT SELECT(col2) ON regress_pg_dump_table TO regress_dump_test_role'
- => {
+ => {
create_order => 4,
create_sql => 'GRANT SELECT(col2) ON regress_pg_dump_table
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT SELECT(col2) ON TABLE public.regress_pg_dump_table TO regress_dump_test_role;\E
\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
unlike => { no_privs => 1, },
- },
+ },
'GRANT USAGE ON regress_pg_dump_table_col1_seq TO regress_dump_test_role'
- => {
+ => {
create_order => 5,
create_sql => 'GRANT USAGE ON SEQUENCE regress_pg_dump_table_col1_seq
TO regress_dump_test_role;',
- regexp => qr/^
+ regexp => qr/^
\QGRANT USAGE ON SEQUENCE public.regress_pg_dump_table_col1_seq TO regress_dump_test_role;\E
\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
unlike => { no_privs => 1, },
- },
+ },
'GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role' => {
regexp => qr/^
\QGRANT USAGE ON SEQUENCE public.regress_pg_dump_seq TO regress_dump_test_role;\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'REVOKE SELECT(col1) ON regress_pg_dump_table' => {
create_order => 3,
create_sql => 'REVOKE SELECT(col1) ON regress_pg_dump_table
FROM PUBLIC;',
- regexp => qr/^
+ regexp => qr/^
\QREVOKE SELECT(col1) ON TABLE public.regress_pg_dump_table FROM PUBLIC;\E
\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
unlike => { no_privs => 1, },
},
@@ -438,7 +438,7 @@ my %tests = (
\n\s+\Qcol1 integer,\E
\n\s+\Qcol2 integer\E
\n\);\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT SELECT ON regress_pg_dump_schema.test_table' => {
@@ -447,7 +447,7 @@ my %tests = (
\QGRANT SELECT ON TABLE regress_pg_dump_schema.test_table TO regress_dump_test_role;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE SEQUENCE regress_pg_dump_schema.test_seq' => {
@@ -459,7 +459,7 @@ my %tests = (
\n\s+\QNO MAXVALUE\E
\n\s+\QCACHE 1;\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT USAGE ON regress_pg_dump_schema.test_seq' => {
@@ -468,7 +468,7 @@ my %tests = (
\QGRANT USAGE ON SEQUENCE regress_pg_dump_schema.test_seq TO regress_dump_test_role;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE TYPE regress_pg_dump_schema.test_type' => {
@@ -476,7 +476,7 @@ my %tests = (
\QCREATE TYPE regress_pg_dump_schema.test_type AS (\E
\n\s+\Qcol1 integer\E
\n\);\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT USAGE ON regress_pg_dump_schema.test_type' => {
@@ -485,7 +485,7 @@ my %tests = (
\QGRANT ALL ON TYPE regress_pg_dump_schema.test_type TO regress_dump_test_role;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE FUNCTION regress_pg_dump_schema.test_func' => {
@@ -493,7 +493,7 @@ my %tests = (
\QCREATE FUNCTION regress_pg_dump_schema.test_func() RETURNS integer\E
\n\s+\QLANGUAGE sql\E
\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT ALL ON regress_pg_dump_schema.test_func' => {
@@ -502,7 +502,7 @@ my %tests = (
\QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO regress_dump_test_role;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'CREATE AGGREGATE regress_pg_dump_schema.test_agg' => {
@@ -511,7 +511,7 @@ my %tests = (
\n\s+\QSFUNC = int2_sum,\E
\n\s+\QSTYPE = bigint\E
\n\);\n/xm,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
'GRANT ALL ON regress_pg_dump_schema.test_agg' => {
@@ -520,7 +520,7 @@ my %tests = (
\QGRANT ALL ON FUNCTION regress_pg_dump_schema.test_agg(smallint) TO regress_dump_test_role;\E\n
\QSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\E
\n/xms,
- like => { binary_upgrade => 1, },
+ like => { binary_upgrade => 1, },
},
# Objects not included in extension, part of schema created by extension
@@ -528,15 +528,15 @@ my %tests = (
create_order => 4,
create_sql => 'CREATE TABLE regress_pg_dump_schema.external_tab
(col1 int);',
- regexp => qr/^
+ regexp => qr/^
\QCREATE TABLE regress_pg_dump_schema.external_tab (\E
\n\s+\Qcol1 integer\E
\n\);\n/xm,
- like => {
- %full_runs,
- schema_only => 1,
- section_pre_data => 1,
- },
+ like => {
+ %full_runs,
+ schema_only => 1,
+ section_pre_data => 1,
+ },
},);
#########################################
@@ -657,7 +657,7 @@ foreach my $run (sort keys %pgdump_runs)
&& !defined($tests{$test}->{unlike}->{$test_key}))
{
if (!ok($output_file =~ $tests{$test}->{regexp},
- "$run: should dump $test"))
+ "$run: should dump $test"))
{
diag("Review $run results in $tempdir");
}
@@ -665,7 +665,7 @@ foreach my $run (sort keys %pgdump_runs)
else
{
if (!ok($output_file !~ $tests{$test}->{regexp},
- "$run: should not dump $test"))
+ "$run: should not dump $test"))
{
diag("Review $run results in $tempdir");
}
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 8a2c6fc..cc53951 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1,83 +1,83 @@
=pod
-=head1 NAME
+ =head1 NAME
-PostgresNode - class representing PostgreSQL server instance
+ PostgresNode - class representing PostgreSQL server instance
-=head1 SYNOPSIS
+ =head1 SYNOPSIS
use PostgresNode;
- my $node = PostgresNode->get_new_node('mynode');
+my $node = PostgresNode->get_new_node('mynode');
- # Create a data directory with initdb
- $node->init();
+# Create a data directory with initdb
+$node->init();
- # Start the PostgreSQL server
- $node->start();
+# Start the PostgreSQL server
+$node->start();
- # Change a setting and restart
- $node->append_conf('postgresql.conf', 'hot_standby = on');
- $node->restart();
+# Change a setting and restart
+$node->append_conf('postgresql.conf', 'hot_standby = on');
+$node->restart();
- # run a query with psql, like:
- # echo 'SELECT 1' | psql -qAXt postgres -v ON_ERROR_STOP=1
- $psql_stdout = $node->safe_psql('postgres', 'SELECT 1');
+# run a query with psql, like:
+# echo 'SELECT 1' | psql -qAXt postgres -v ON_ERROR_STOP=1
+$psql_stdout = $node->safe_psql('postgres', 'SELECT 1');
- # Run psql with a timeout, capturing stdout and stderr
- # as well as the psql exit code. Pass some extra psql
- # options. If there's an error from psql raise an exception.
- my ($stdout, $stderr, $timed_out);
- my $cmdret = $node->psql('postgres', 'SELECT pg_sleep(60)',
- stdout => \$stdout, stderr => \$stderr,
- timeout => 30, timed_out => \$timed_out,
- extra_params => ['--single-transaction'],
- on_error_die => 1)
+# Run psql with a timeout, capturing stdout and stderr
+# as well as the psql exit code. Pass some extra psql
+# options. If there's an error from psql raise an exception.
+my ($stdout, $stderr, $timed_out);
+my $cmdret = $node->psql('postgres', 'SELECT pg_sleep(60)',
+ stdout => \$stdout, stderr => \$stderr,
+ timeout => 30, timed_out => \$timed_out,
+ extra_params => ['--single-transaction'],
+ on_error_die => 1)
print "Sleep timed out" if $timed_out;
- # Similar thing, more convenient in common cases
- my ($cmdret, $stdout, $stderr) =
- $node->psql('postgres', 'SELECT 1');
+# Similar thing, more convenient in common cases
+my ($cmdret, $stdout, $stderr) =
+ $node->psql('postgres', 'SELECT 1');
- # run query every second until it returns 't'
- # or times out
- $node->poll_query_until('postgres', q|SELECT random() < 0.1;|')
- or die "timed out";
+# run query every second until it returns 't'
+# or times out
+$node->poll_query_until('postgres', q|SELECT random() < 0.1;|')
+ or die "timed out";
- # Do an online pg_basebackup
- my $ret = $node->backup('testbackup1');
+# Do an online pg_basebackup
+my $ret = $node->backup('testbackup1');
- # Take a backup of a running server
- my $ret = $node->backup_fs_hot('testbackup2');
+# Take a backup of a running server
+my $ret = $node->backup_fs_hot('testbackup2');
- # Take a backup of a stopped server
- $node->stop;
- my $ret = $node->backup_fs_cold('testbackup3')
+# Take a backup of a stopped server
+$node->stop;
+my $ret = $node->backup_fs_cold('testbackup3')
# Restore it to create a new independent node (not a replica)
my $replica = get_new_node('replica');
- $replica->init_from_backup($node, 'testbackup');
- $replica->start;
+$replica->init_from_backup($node, 'testbackup');
+$replica->start;
- # Stop the server
- $node->stop('fast');
+# Stop the server
+$node->stop('fast');
=head1 DESCRIPTION
-PostgresNode contains a set of routines able to work on a PostgreSQL node,
-allowing to start, stop, backup and initialize it with various options.
-The set of nodes managed by a given test is also managed by this module.
+ PostgresNode contains a set of routines able to work on a PostgreSQL node,
+ allowing to start, stop, backup and initialize it with various options.
+ The set of nodes managed by a given test is also managed by this module.
-In addition to node management, PostgresNode instances have some wrappers
-around Test::More functions to run commands with an environment set up to
-point to the instance.
+ In addition to node management, PostgresNode instances have some wrappers
+ around Test::More functions to run commands with an environment set up to
+ point to the instance.
-The IPC::Run module is required.
+ The IPC::Run module is required.
-=cut
+ =cut
-package PostgresNode;
+ package PostgresNode;
use strict;
use warnings;
@@ -102,7 +102,7 @@ use Scalar::Util qw(blessed);
our @EXPORT = qw(
get_new_node
-);
+ );
our ($test_localhost, $test_pghost, $last_port_assigned, @all_nodes, $died);
@@ -132,20 +132,20 @@ INIT
=pod
-=head1 METHODS
+ =head1 METHODS
-=over
+ =over
-=item PostgresNode::new($class, $name, $pghost, $pgport)
+ =item PostgresNode::new($class, $name, $pghost, $pgport)
-Create a new PostgresNode instance. Does not initdb or start it.
+ Create a new PostgresNode instance. Does not initdb or start it.
-You should generally prefer to use get_new_node() instead since it takes care
-of finding port numbers, registering instances for cleanup, etc.
+ You should generally prefer to use get_new_node() instead since it takes care
+ of finding port numbers, registering instances for cleanup, etc.
-=cut
+ =cut
-sub new
+ sub new
{
my ($class, $name, $pghost, $pgport) = @_;
my $testname = basename($0);
@@ -186,15 +186,15 @@ sub port
=pod
-=item $node->host()
+ =item $node->host()
-Return the host (like PGHOST) for this instance. May be a UNIX socket path.
+ Return the host (like PGHOST) for this instance. May be a UNIX socket path.
-Use $node->connstr() if you want a connection string.
+ Use $node->connstr() if you want a connection string.
-=cut
+ =cut
-sub host
+ sub host
{
my ($self) = @_;
return $self->{_host};
@@ -202,14 +202,14 @@ sub host
=pod
-=item $node->basedir()
+ =item $node->basedir()
-The directory all the node's files will be within - datadir, archive directory,
-backups, etc.
+ The directory all the node's files will be within - datadir, archive directory,
+ backups, etc.
-=cut
+ =cut
-sub basedir
+ sub basedir
{
my ($self) = @_;
return $self->{_basedir};
@@ -217,13 +217,13 @@ sub basedir
=pod
-=item $node->name()
+ =item $node->name()
-The name assigned to the node at creation time.
+ The name assigned to the node at creation time.
-=cut
+ =cut
-sub name
+ sub name
{
my ($self) = @_;
return $self->{_name};
@@ -231,13 +231,13 @@ sub name
=pod
-=item $node->logfile()
+ =item $node->logfile()
-Path to the PostgreSQL log file for this instance.
+ Path to the PostgreSQL log file for this instance.
-=cut
+ =cut
-sub logfile
+ sub logfile
{
my ($self) = @_;
return $self->{_logfile};
@@ -245,14 +245,14 @@ sub logfile
=pod
-=item $node->connstr()
+ =item $node->connstr()
-Get a libpq connection string that will establish a connection to
-this node. Suitable for passing to psql, DBD::Pg, etc.
+ Get a libpq connection string that will establish a connection to
+ this node. Suitable for passing to psql, DBD::Pg, etc.
-=cut
+ =cut
-sub connstr
+ sub connstr
{
my ($self, $dbname) = @_;
my $pgport = $self->port;
@@ -272,13 +272,13 @@ sub connstr
=pod
-=item $node->group_access()
+ =item $node->group_access()
-Does the data dir allow group access?
+ Does the data dir allow group access?
-=cut
+ =cut
-sub group_access
+ sub group_access
{
my ($self) = @_;
@@ -292,14 +292,14 @@ sub group_access
=pod
-=item $node->data_dir()
+ =item $node->data_dir()
-Returns the path to the data directory. postgresql.conf and pg_hba.conf are
-always here.
+ Returns the path to the data directory. postgresql.conf and pg_hba.conf are
+ always here.
-=cut
+ =cut
-sub data_dir
+ sub data_dir
{
my ($self) = @_;
my $res = $self->basedir;
@@ -308,13 +308,13 @@ sub data_dir
=pod
-=item $node->archive_dir()
+ =item $node->archive_dir()
-If archiving is enabled, WAL files go here.
+ If archiving is enabled, WAL files go here.
-=cut
+ =cut
-sub archive_dir
+ sub archive_dir
{
my ($self) = @_;
my $basedir = $self->basedir;
@@ -323,13 +323,13 @@ sub archive_dir
=pod
-=item $node->backup_dir()
+ =item $node->backup_dir()
-The output path for backups taken with $node->backup()
+ The output path for backups taken with $node->backup()
-=cut
+ =cut
-sub backup_dir
+ sub backup_dir
{
my ($self) = @_;
my $basedir = $self->basedir;
@@ -338,14 +338,14 @@ sub backup_dir
=pod
-=item $node->info()
+ =item $node->info()
-Return a string containing human-readable diagnostic information (paths, etc)
-about this node.
+ Return a string containing human-readable diagnostic information (paths, etc)
+ about this node.
-=cut
+ =cut
-sub info
+ sub info
{
my ($self) = @_;
my $_info = '';
@@ -362,13 +362,13 @@ sub info
=pod
-=item $node->dump_info()
+ =item $node->dump_info()
-Print $node->info()
+ Print $node->info()
-=cut
+ =cut
-sub dump_info
+ sub dump_info
{
my ($self) = @_;
print $self->info;
@@ -399,30 +399,30 @@ sub set_replication_conf
=pod
-=item $node->init(...)
+ =item $node->init(...)
-Initialize a new cluster for testing.
+ Initialize a new cluster for testing.
-Authentication is set up so that only the current OS user can access the
-cluster. On Unix, we use Unix domain socket connections, with the socket in
-a directory that's only accessible to the current user to ensure that.
-On Windows, we use SSPI authentication to ensure the same (by pg_regress
---config-auth).
+ Authentication is set up so that only the current OS user can access the
+ cluster. On Unix, we use Unix domain socket connections, with the socket in
+ a directory that's only accessible to the current user to ensure that.
+ On Windows, we use SSPI authentication to ensure the same (by pg_regress
+ --config-auth).
-WAL archiving can be enabled on this node by passing the keyword parameter
-has_archiving => 1. This is disabled by default.
+ WAL archiving can be enabled on this node by passing the keyword parameter
+ has_archiving => 1. This is disabled by default.
-postgresql.conf can be set up for replication by passing the keyword
-parameter allows_streaming => 'logical' or 'physical' (passing 1 will also
-suffice for physical replication) depending on type of replication that
-should be enabled. This is disabled by default.
+ postgresql.conf can be set up for replication by passing the keyword
+ parameter allows_streaming => 'logical' or 'physical' (passing 1 will also
+ suffice for physical replication) depending on type of replication that
+ should be enabled. This is disabled by default.
-The new node is set up in a fast but unsafe configuration where fsync is
-disabled.
+ The new node is set up in a fast but unsafe configuration where fsync is
+ disabled.
-=cut
+ =cut
-sub init
+ sub init
{
my ($self, %params) = @_;
my $port = $self->port;
@@ -494,18 +494,18 @@ sub init
=pod
-=item $node->append_conf(filename, str)
+ =item $node->append_conf(filename, str)
-A shortcut method to append to files like pg_hba.conf and postgresql.conf.
+ A shortcut method to append to files like pg_hba.conf and postgresql.conf.
-Does no validation or sanity checking. Does not reload the configuration
-after writing.
+ Does no validation or sanity checking. Does not reload the configuration
+ after writing.
-A newline is automatically appended to the string.
+ A newline is automatically appended to the string.
-=cut
+ =cut
-sub append_conf
+ sub append_conf
{
my ($self, $filename, $str) = @_;
@@ -521,18 +521,18 @@ sub append_conf
=pod
-=item $node->backup(backup_name)
+ =item $node->backup(backup_name)
-Create a hot backup with B<pg_basebackup> in subdirectory B<backup_name> of
-B<< $node->backup_dir >>, including the WAL. WAL files
-fetched at the end of the backup, not streamed.
+ Create a hot backup with B<pg_basebackup> in subdirectory B<backup_name> of
+ B<< $node->backup_dir >>, including the WAL. WAL files
+ fetched at the end of the backup, not streamed.
-You'll have to configure a suitable B<max_wal_senders> on the
-target server since it isn't done by default.
+ You'll have to configure a suitable B<max_wal_senders> on the
+ target server since it isn't done by default.
-=cut
+ =cut
-sub backup
+ sub backup
{
my ($self, $backup_name) = @_;
my $backup_path = $self->backup_dir . '/' . $backup_name;
@@ -548,17 +548,17 @@ sub backup
=item $node->backup_fs_hot(backup_name)
-Create a backup with a filesystem level copy in subdirectory B<backup_name> of
-B<< $node->backup_dir >>, including WAL.
+ Create a backup with a filesystem level copy in subdirectory B<backup_name> of
+ B<< $node->backup_dir >>, including WAL.
-Archiving must be enabled, as B<pg_start_backup()> and B<pg_stop_backup()> are
-used. This is not checked or enforced.
+ Archiving must be enabled, as B<pg_start_backup()> and B<pg_stop_backup()> are
+ used. This is not checked or enforced.
-The backup name is passed as the backup label to B<pg_start_backup()>.
+ The backup name is passed as the backup label to B<pg_start_backup()>.
-=cut
+ =cut
-sub backup_fs_hot
+ sub backup_fs_hot
{
my ($self, $backup_name) = @_;
$self->_backup_fs($backup_name, 1);
@@ -567,15 +567,15 @@ sub backup_fs_hot
=item $node->backup_fs_cold(backup_name)
-Create a backup with a filesystem level copy in subdirectory B<backup_name> of
-B<< $node->backup_dir >>, including WAL. The server must be
-stopped as no attempt to handle concurrent writes is made.
+ Create a backup with a filesystem level copy in subdirectory B<backup_name> of
+ B<< $node->backup_dir >>, including WAL. The server must be
+ stopped as no attempt to handle concurrent writes is made.
-Use B<backup> or B<backup_fs_hot> if you want to back up a running server.
+ Use B<backup> or B<backup_fs_hot> if you want to back up a running server.
-=cut
+ =cut
-sub backup_fs_cold
+ sub backup_fs_cold
{
my ($self, $backup_name) = @_;
$self->_backup_fs($backup_name, 0);
@@ -627,27 +627,27 @@ sub _backup_fs
=pod
-=item $node->init_from_backup(root_node, backup_name)
+ =item $node->init_from_backup(root_node, backup_name)
-Initialize a node from a backup, which may come from this node or a different
-node. root_node must be a PostgresNode reference, backup_name the string name
-of a backup previously created on that node with $node->backup.
+ Initialize a node from a backup, which may come from this node or a different
+ node. root_node must be a PostgresNode reference, backup_name the string name
+ of a backup previously created on that node with $node->backup.
-Does not start the node after initializing it.
+ Does not start the node after initializing it.
-Streaming replication can be enabled on this node by passing the keyword
-parameter has_streaming => 1. This is disabled by default.
+ Streaming replication can be enabled on this node by passing the keyword
+ parameter has_streaming => 1. This is disabled by default.
-Restoring WAL segments from archives using restore_command can be enabled
-by passing the keyword parameter has_restoring => 1. This is disabled by
-default.
+ Restoring WAL segments from archives using restore_command can be enabled
+ by passing the keyword parameter has_restoring => 1. This is disabled by
+ default.
-The backup is copied, leaving the original unmodified. pg_hba.conf is
-unconditionally set to enable replication connections.
+ The backup is copied, leaving the original unmodified. pg_hba.conf is
+ unconditionally set to enable replication connections.
-=cut
+ =cut
-sub init_from_backup
+ sub init_from_backup
{
my ($self, $root_node, $backup_name, %params) = @_;
my $backup_path = $root_node->backup_dir . '/' . $backup_name;
@@ -716,17 +716,17 @@ sub start
=pod
-=item $node->stop(mode)
+ =item $node->stop(mode)
-Stop the node using pg_ctl -m $mode and wait for it to stop.
+ Stop the node using pg_ctl -m $mode and wait for it to stop.
-Note: if the node is already known stopped, this does nothing.
-However, if we think it's running and it's not, it's important for
-this to fail. Otherwise, tests might fail to detect server crashes.
+ Note: if the node is already known stopped, this does nothing.
+ However, if we think it's running and it's not, it's important for
+ this to fail. Otherwise, tests might fail to detect server crashes.
-=cut
+ =cut
-sub stop
+ sub stop
{
my ($self, $mode) = @_;
my $port = $self->port;
@@ -742,13 +742,13 @@ sub stop
=pod
-=item $node->reload()
+ =item $node->reload()
-Reload configuration parameters on the node.
+ Reload configuration parameters on the node.
-=cut
+ =cut
-sub reload
+ sub reload
{
my ($self) = @_;
my $port = $self->port;
@@ -761,13 +761,13 @@ sub reload
=pod
-=item $node->restart()
+ =item $node->restart()
-Wrapper for pg_ctl restart
+ Wrapper for pg_ctl restart
-=cut
+ =cut
-sub restart
+ sub restart
{
my ($self) = @_;
my $port = $self->port;
@@ -783,13 +783,13 @@ sub restart
=pod
-=item $node->promote()
+ =item $node->promote()
-Wrapper for pg_ctl promote
+ Wrapper for pg_ctl promote
-=cut
+ =cut
-sub promote
+ sub promote
{
my ($self) = @_;
my $port = $self->port;
@@ -804,13 +804,13 @@ sub promote
=pod
-=item $node->logrotate()
+ =item $node->logrotate()
-Wrapper for pg_ctl logrotate
+ Wrapper for pg_ctl logrotate
-=cut
+ =cut
-sub logrotate
+ sub logrotate
{
my ($self) = @_;
my $port = $self->port;
@@ -858,25 +858,25 @@ sub enable_restoring
my $copy_command =
$TestLib::windows_os
? qq{copy "$path\\\\%f" "%p"}
- : qq{cp "$path/%f" "%p"};
+: qq{cp "$path/%f" "%p"};
$self->append_conf(
'postgresql.conf', qq(
-restore_command = '$copy_command'
-));
- $self->set_standby_mode();
- return;
+ restore_command = '$copy_command'
+ ));
+$self->set_standby_mode();
+return;
}
=pod
-=item $node->set_standby_mode()
+ =item $node->set_standby_mode()
-Place standby.signal file.
+ Place standby.signal file.
-=cut
+ =cut
-sub set_standby_mode
+ sub set_standby_mode
{
my ($self) = @_;
@@ -903,15 +903,15 @@ sub enable_archiving
my $copy_command =
$TestLib::windows_os
? qq{copy "%p" "$path\\\\%f"}
- : qq{cp "%p" "$path/%f"};
+: qq{cp "%p" "$path/%f"};
# Enable archive_mode and archive_command on node
$self->append_conf(
'postgresql.conf', qq(
-archive_mode = on
-archive_command = '$copy_command'
-));
- return;
+ archive_mode = on
+ archive_command = '$copy_command'
+ ));
+return;
}
# Internal method
@@ -943,21 +943,21 @@ sub _update_pid
=pod
-=item PostgresNode->get_new_node(node_name)
+ =item PostgresNode->get_new_node(node_name)
-Build a new object of class C<PostgresNode> (or of a subclass, if you have
-one), assigning a free port number. Remembers the node, to prevent its port
-number from being reused for another node, and to ensure that it gets
-shut down when the test script exits.
+ Build a new object of class C<PostgresNode> (or of a subclass, if you have
+ one), assigning a free port number. Remembers the node, to prevent its port
+ number from being reused for another node, and to ensure that it gets
+ shut down when the test script exits.
-You should generally use this instead of C<PostgresNode::new(...)>.
+ You should generally use this instead of C<PostgresNode::new(...)>.
-For backwards compatibility, it is also exported as a standalone function,
-which can only create objects of class C<PostgresNode>.
+ For backwards compatibility, it is also exported as a standalone function,
+ which can only create objects of class C<PostgresNode>.
-=cut
+ =cut
-sub get_new_node
+ sub get_new_node
{
my $class = 'PostgresNode';
$class = shift if 1 < scalar @_;
@@ -1057,13 +1057,13 @@ END
=pod
-=item $node->teardown_node()
+ =item $node->teardown_node()
-Do an immediate stop of the node
+ Do an immediate stop of the node
-=cut
+ =cut
-sub teardown_node
+ sub teardown_node
{
my $self = shift;
@@ -1073,13 +1073,13 @@ sub teardown_node
=pod
-=item $node->clean_node()
+ =item $node->clean_node()
-Remove the base directory of the node if the node has been stopped.
+ Remove the base directory of the node if the node has been stopped.
-=cut
+ =cut
-sub clean_node
+ sub clean_node
{
my $self = shift;
@@ -1089,17 +1089,17 @@ sub clean_node
=pod
-=item $node->safe_psql($dbname, $sql) => stdout
+ =item $node->safe_psql($dbname, $sql) => stdout
-Invoke B<psql> to run B<sql> on B<dbname> and return its stdout on success.
-Die if the SQL produces an error. Runs with B<ON_ERROR_STOP> set.
+ Invoke B<psql> to run B<sql> on B<dbname> and return its stdout on success.
+ Die if the SQL produces an error. Runs with B<ON_ERROR_STOP> set.
-Takes optional extra params like timeout and timed_out parameters with the same
-options as psql.
+ Takes optional extra params like timeout and timed_out parameters with the same
+ options as psql.
-=cut
+ =cut
-sub safe_psql
+ sub safe_psql
{
my ($self, $dbname, $sql, %params) = @_;
@@ -1176,36 +1176,36 @@ instead die with an informative message.
Set a timeout for the psql call as an interval accepted by B<IPC::Run::timer>
(integer seconds is fine). This method raises an exception on timeout, unless
-the B<timed_out> parameter is also given.
+ the B<timed_out> parameter is also given.
-=item timed_out => \$timed_out
+ =item timed_out => \$timed_out
-If B<timeout> is set and this parameter is given, the scalar it references
-is set to true if the psql call times out.
+ If B<timeout> is set and this parameter is given, the scalar it references
+ is set to true if the psql call times out.
-=item extra_params => ['--single-transaction']
+ =item extra_params => ['--single-transaction']
-If given, it must be an array reference containing additional parameters to B<psql>.
+ If given, it must be an array reference containing additional parameters to B<psql>.
-=back
+ =back
-e.g.
+ e.g.
- my ($stdout, $stderr, $timed_out);
- my $cmdret = $node->psql('postgres', 'SELECT pg_sleep(60)',
- stdout => \$stdout, stderr => \$stderr,
- timeout => 30, timed_out => \$timed_out,
- extra_params => ['--single-transaction'])
+ my ($stdout, $stderr, $timed_out);
+my $cmdret = $node->psql('postgres', 'SELECT pg_sleep(60)',
+ stdout => \$stdout, stderr => \$stderr,
+ timeout => 30, timed_out => \$timed_out,
+ extra_params => ['--single-transaction'])
-will set $cmdret to undef and $timed_out to a true value.
+ will set $cmdret to undef and $timed_out to a true value.
- $node->psql('postgres', $sql, on_error_die => 1);
+ $node->psql('postgres', $sql, on_error_die => 1);
dies with an informative message if $sql fails.
-=cut
+ =cut
-sub psql
+ sub psql
{
my ($self, $dbname, $sql, %params) = @_;
@@ -1214,7 +1214,7 @@ sub psql
my $timeout = undef;
my $timeout_exception = 'psql timed out';
my @psql_params =
- ('psql', '-XAtq', '-d', $self->connstr($dbname), '-f', '-');
+ ('psql', '-XAtq', '-d', $self->connstr($dbname), '-f', '-');
# If the caller wants an array and hasn't passed stdout/stderr
# references, allocate temporary ones to capture them so we
@@ -1238,7 +1238,7 @@ sub psql
push @psql_params, '-v', 'ON_ERROR_STOP=1' if $params{on_error_stop};
push @psql_params, @{ $params{extra_params} }
- if defined $params{extra_params};
+ if defined $params{extra_params};
$timeout =
IPC::Run::timeout($params{timeout}, exception => $timeout_exception)
@@ -1278,7 +1278,7 @@ sub psql
# timeout, which we'll handle by testing is_expired
die $exc_save
if (blessed($exc_save)
- || $exc_save !~ /^\Q$timeout_exception\E/);
+ || $exc_save !~ /^\Q$timeout_exception\E/);
$ret = undef;
@@ -1301,76 +1301,76 @@ sub psql
{
chomp $$stdout;
$$stdout =~ s/\r//g if $TestLib::windows_os;
- }
+ }
- if (defined $$stderr)
- {
- chomp $$stderr;
- $$stderr =~ s/\r//g if $TestLib::windows_os;
+ if (defined $$stderr)
+ {
+ chomp $$stderr;
+ $$stderr =~ s/\r//g if $TestLib::windows_os;
}
# See http://perldoc.perl.org/perlvar.html#%24CHILD_ERROR
- # We don't use IPC::Run::Simple to limit dependencies.
- #
- # We always die on signal.
- my $core = $ret & 128 ? " (core dumped)" : "";
- die "psql exited with signal "
- . ($ret & 127)
- . "$core: '$$stderr' while running '@psql_params'"
- if $ret & 127;
- $ret = $ret >> 8;
-
- if ($ret && $params{on_error_die})
- {
- die "psql error: stderr: '$$stderr'\nwhile running '@psql_params'"
- if $ret == 1;
- die "connection error: '$$stderr'\nwhile running '@psql_params'"
- if $ret == 2;
- die
- "error running SQL: '$$stderr'\nwhile running '@psql_params' with sql '$sql'"
- if $ret == 3;
- die "psql returns $ret: '$$stderr'\nwhile running '@psql_params'";
- }
-
- if (wantarray)
- {
- return ($ret, $$stdout, $$stderr);
- }
- else
- {
- return $ret;
- }
-}
+ # We don't use IPC::Run::Simple to limit dependencies.
+ #
+ # We always die on signal.
+ my $core = $ret & 128 ? " (core dumped)" : "";
+ die "psql exited with signal "
+ . ($ret & 127)
+ . "$core: '$$stderr' while running '@psql_params'"
+ if $ret & 127;
+ $ret = $ret >> 8;
+
+ if ($ret && $params{on_error_die})
+ {
+ die "psql error: stderr: '$$stderr'\nwhile running '@psql_params'"
+ if $ret == 1;
+ die "connection error: '$$stderr'\nwhile running '@psql_params'"
+ if $ret == 2;
+ die
+ "error running SQL: '$$stderr'\nwhile running '@psql_params' with sql '$sql'"
+ if $ret == 3;
+ die "psql returns $ret: '$$stderr'\nwhile running '@psql_params'";
+ }
+
+ if (wantarray)
+ {
+ return ($ret, $$stdout, $$stderr);
+ }
+ else
+ {
+ return $ret;
+ }
+ }
-=pod
+ =pod
-=item $node->poll_query_until($dbname, $query [, $expected ])
+ =item $node->poll_query_until($dbname, $query [, $expected ])
-Run B<$query> repeatedly, until it returns the B<$expected> result
-('t', or SQL boolean true, by default).
-Continues polling if B<psql> returns an error result.
-Times out after 180 seconds.
-Returns 1 if successful, 0 if timed out.
+ Run B<$query> repeatedly, until it returns the B<$expected> result
+ ('t', or SQL boolean true, by default).
+ Continues polling if B<psql> returns an error result.
+ Times out after 180 seconds.
+ Returns 1 if successful, 0 if timed out.
-=cut
+ =cut
-sub poll_query_until
-{
- my ($self, $dbname, $query, $expected) = @_;
+ sub poll_query_until
+ {
+ my ($self, $dbname, $query, $expected) = @_;
- $expected = 't' unless defined($expected); # default value
+ $expected = 't' unless defined($expected); # default value
- my $cmd = [ 'psql', '-XAt', '-c', $query, '-d', $self->connstr($dbname) ];
- my ($stdout, $stderr);
- my $max_attempts = 180 * 10;
- my $attempts = 0;
+ my $cmd = [ 'psql', '-XAt', '-c', $query, '-d', $self->connstr($dbname) ];
+ my ($stdout, $stderr);
+ my $max_attempts = 180 * 10;
+ my $attempts = 0;
- while ($attempts < $max_attempts)
- {
- my $result = IPC::Run::run $cmd, '>', \$stdout, '2>', \$stderr;
+ while ($attempts < $max_attempts)
+ {
+ my $result = IPC::Run::run $cmd, '>', \$stdout, '2>', \$stderr;
- chomp($stdout);
- $stdout =~ s/\r//g if $TestLib::windows_os;
+ chomp($stdout);
+ $stdout =~ s/\r//g if $TestLib::windows_os;
if ($stdout eq $expected)
{
@@ -1387,7 +1387,7 @@ sub poll_query_until
# output from the last attempt, hopefully that's useful for debugging.
chomp($stderr);
$stderr =~ s/\r//g if $TestLib::windows_os;
- diag qq(poll_query_until timed out executing this query:
+ diag qq(poll_query_until timed out executing this query:
$query
expecting this output:
$expected
@@ -1422,13 +1422,13 @@ sub command_ok
=pod
-=item $node->command_fails(...)
+ =item $node->command_fails(...)
-TestLib::command_fails with our PGPORT. See command_ok(...)
+ TestLib::command_fails with our PGPORT. See command_ok(...)
-=cut
+ =cut
-sub command_fails
+ sub command_fails
{
local $Test::Builder::Level = $Test::Builder::Level + 1;
@@ -1442,13 +1442,13 @@ sub command_fails
=pod
-=item $node->command_like(...)
+ =item $node->command_like(...)
-TestLib::command_like with our PGPORT. See command_ok(...)
+ TestLib::command_like with our PGPORT. See command_ok(...)
-=cut
+ =cut
-sub command_like
+ sub command_like
{
local $Test::Builder::Level = $Test::Builder::Level + 1;
@@ -1462,13 +1462,13 @@ sub command_like
=pod
-=item $node->command_checks_all(...)
+ =item $node->command_checks_all(...)
-TestLib::command_checks_all with our PGPORT. See command_ok(...)
+ TestLib::command_checks_all with our PGPORT. See command_ok(...)
-=cut
+ =cut
-sub command_checks_all
+ sub command_checks_all
{
local $Test::Builder::Level = $Test::Builder::Level + 1;
@@ -1482,17 +1482,17 @@ sub command_checks_all
=pod
-=item $node->issues_sql_like(cmd, expected_sql, test_name)
+ =item $node->issues_sql_like(cmd, expected_sql, test_name)
-Run a command on the node, then verify that $expected_sql appears in the
-server log file.
+ Run a command on the node, then verify that $expected_sql appears in the
+ server log file.
-Reads the whole log file so be careful when working with large log outputs.
-The log file is truncated prior to running the command, however.
+ Reads the whole log file so be careful when working with large log outputs.
+ The log file is truncated prior to running the command, however.
-=cut
+ =cut
-sub issues_sql_like
+ sub issues_sql_like
{
local $Test::Builder::Level = $Test::Builder::Level + 1;
@@ -1510,14 +1510,14 @@ sub issues_sql_like
=pod
-=item $node->run_log(...)
+ =item $node->run_log(...)
-Runs a shell command like TestLib::run_log, but with PGPORT set so
-that the command will default to connecting to this PostgresNode.
+ Runs a shell command like TestLib::run_log, but with PGPORT set so
+ that the command will default to connecting to this PostgresNode.
-=cut
+ =cut
-sub run_log
+ sub run_log
{
my $self = shift;
@@ -1529,21 +1529,21 @@ sub run_log
=pod
-=item $node->lsn(mode)
+ =item $node->lsn(mode)
-Look up WAL locations on the server:
+ Look up WAL locations on the server:
- * insert location (master only, error on replica)
- * write location (master only, error on replica)
- * flush location (master only, error on replica)
- * receive location (always undef on master)
- * replay location (always undef on master)
+ * insert location (master only, error on replica)
+ * write location (master only, error on replica)
+ * flush location (master only, error on replica)
+ * receive location (always undef on master)
+ * replay location (always undef on master)
-mode must be specified.
+ mode must be specified.
-=cut
+ =cut
-sub lsn
+ sub lsn
{
my ($self, $mode) = @_;
my %modes = (
@@ -1572,34 +1572,34 @@ sub lsn
=pod
-=item $node->wait_for_catchup(standby_name, mode, target_lsn)
+ =item $node->wait_for_catchup(standby_name, mode, target_lsn)
-Wait for the node with application_name standby_name (usually from node->name,
-also works for logical subscriptions)
-until its replication location in pg_stat_replication equals or passes the
-upstream's WAL insert point at the time this function is called. By default
-the replay_lsn is waited for, but 'mode' may be specified to wait for any of
-sent|write|flush|replay. The connection catching up must be in a streaming
-state.
+ Wait for the node with application_name standby_name (usually from node->name,
+ also works for logical subscriptions)
+ until its replication location in pg_stat_replication equals or passes the
+ upstream's WAL insert point at the time this function is called. By default
+ the replay_lsn is waited for, but 'mode' may be specified to wait for any of
+ sent|write|flush|replay. The connection catching up must be in a streaming
+ state.
-If there is no active replication connection from this peer, waits until
-poll_query_until timeout.
+ If there is no active replication connection from this peer, waits until
+ poll_query_until timeout.
-Requires that the 'postgres' db exists and is accessible.
+ Requires that the 'postgres' db exists and is accessible.
-target_lsn may be any arbitrary lsn, but is typically $master_node->lsn('insert').
-If omitted, pg_current_wal_lsn() is used.
+ target_lsn may be any arbitrary lsn, but is typically $master_node->lsn('insert').
+ If omitted, pg_current_wal_lsn() is used.
-This is not a test. It die()s on failure.
+ This is not a test. It die()s on failure.
-=cut
+ =cut
-sub wait_for_catchup
+ sub wait_for_catchup
{
my ($self, $standby_name, $mode, $target_lsn) = @_;
$mode = defined($mode) ? $mode : 'replay';
my %valid_modes =
- ('sent' => 1, 'write' => 1, 'flush' => 1, 'replay' => 1);
+ ('sent' => 1, 'write' => 1, 'flush' => 1, 'replay' => 1);
croak "unknown mode $mode for 'wait_for_catchup', valid modes are "
. join(', ', keys(%valid_modes))
unless exists($valid_modes{$mode});
@@ -1677,27 +1677,27 @@ sub wait_for_slot_catchup
=pod
-=item $node->query_hash($dbname, $query, @columns)
+ =item $node->query_hash($dbname, $query, @columns)
-Execute $query on $dbname, replacing any appearance of the string __COLUMNS__
-within the query with a comma-separated list of @columns.
+ Execute $query on $dbname, replacing any appearance of the string __COLUMNS__
+ within the query with a comma-separated list of @columns.
-If __COLUMNS__ does not appear in the query, its result columns must EXACTLY
-match the order and number (but not necessarily alias) of supplied @columns.
+ If __COLUMNS__ does not appear in the query, its result columns must EXACTLY
+ match the order and number (but not necessarily alias) of supplied @columns.
-The query must return zero or one rows.
+ The query must return zero or one rows.
-Return a hash-ref representation of the results of the query, with any empty
-or null results as defined keys with an empty-string value. There is no way
-to differentiate between null and empty-string result fields.
+ Return a hash-ref representation of the results of the query, with any empty
+ or null results as defined keys with an empty-string value. There is no way
+ to differentiate between null and empty-string result fields.
-If the query returns zero rows, return a hash with all columns empty. There
-is no way to differentiate between zero rows returned and a row with only
-null columns.
+ If the query returns zero rows, return a hash with all columns empty. There
+ is no way to differentiate between zero rows returned and a row with only
+ null columns.
-=cut
+ =cut
-sub query_hash
+ sub query_hash
{
my ($self, $dbname, $query, @columns) = @_;
croak 'calls in array context for multi-row results not supported yet'
@@ -1722,20 +1722,20 @@ sub query_hash
=pod
-=item $node->slot(slot_name)
+ =item $node->slot(slot_name)
-Return hash-ref of replication slot data for the named slot, or a hash-ref with
-all values '' if not found. Does not differentiate between null and empty string
-for fields, no field is ever undef.
+ Return hash-ref of replication slot data for the named slot, or a hash-ref with
+ all values '' if not found. Does not differentiate between null and empty string
+ for fields, no field is ever undef.
-The restart_lsn and confirmed_flush_lsn fields are returned verbatim, and also
-as a 2-list of [highword, lowword] integer. Since we rely on Perl 5.8.8 we can't
-"use bigint", it's from 5.20, and we can't assume we have Math::Bigint from CPAN
-either.
+ The restart_lsn and confirmed_flush_lsn fields are returned verbatim, and also
+ as a 2-list of [highword, lowword] integer. Since we rely on Perl 5.8.8 we can't
+ "use bigint", it's from 5.20, and we can't assume we have Math::Bigint from CPAN
+ either.
-=cut
+ =cut
-sub slot
+ sub slot
{
my ($self, $slot_name) = @_;
my @columns = (
@@ -1750,24 +1750,24 @@ sub slot
=pod
-=item $node->pg_recvlogical_upto(self, dbname, slot_name, endpos, timeout_secs, ...)
+ =item $node->pg_recvlogical_upto(self, dbname, slot_name, endpos, timeout_secs, ...)
-Invoke pg_recvlogical to read from slot_name on dbname until LSN endpos, which
-corresponds to pg_recvlogical --endpos. Gives up after timeout (if nonzero).
+ Invoke pg_recvlogical to read from slot_name on dbname until LSN endpos, which
+ corresponds to pg_recvlogical --endpos. Gives up after timeout (if nonzero).
-Disallows pg_recvlogical from internally retrying on error by passing --no-loop.
+ Disallows pg_recvlogical from internally retrying on error by passing --no-loop.
-Plugin options are passed as additional keyword arguments.
+ Plugin options are passed as additional keyword arguments.
-If called in scalar context, returns stdout, and die()s on timeout or nonzero return.
+ If called in scalar context, returns stdout, and die()s on timeout or nonzero return.
-If called in array context, returns a tuple of (retval, stdout, stderr, timeout).
-timeout is the IPC::Run::Timeout object whose is_expired method can be tested
-to check for timeout. retval is undef on timeout.
+ If called in array context, returns a tuple of (retval, stdout, stderr, timeout).
+ timeout is the IPC::Run::Timeout object whose is_expired method can be tested
+ to check for timeout. retval is undef on timeout.
-=cut
+ =cut
-sub pg_recvlogical_upto
+ sub pg_recvlogical_upto
{
my ($self, $dbname, $slot_name, $endpos, $timeout_secs, %plugin_options)
= @_;
@@ -1825,7 +1825,7 @@ sub pg_recvlogical_upto
};
$stdout =~ s/\r//g if $TestLib::windows_os;
- $stderr =~ s/\r//g if $TestLib::windows_os;
+ $stderr =~ s/\r//g if $TestLib::windows_os;
if (wantarray)
{
diff --git a/src/test/perl/RecursiveCopy.pm b/src/test/perl/RecursiveCopy.pm
index baf5d0a..6304d1a 100644
--- a/src/test/perl/RecursiveCopy.pm
+++ b/src/test/perl/RecursiveCopy.pm
@@ -1,13 +1,13 @@
=pod
-=head1 NAME
+ =head1 NAME
-RecursiveCopy - simple recursive copy implementation
+ RecursiveCopy - simple recursive copy implementation
-=head1 SYNOPSIS
+ =head1 SYNOPSIS
-use RecursiveCopy;
+ use RecursiveCopy;
RecursiveCopy::copypath($from, $to, filterfn => sub { return 1; });
RecursiveCopy::copypath($from, $to);
@@ -25,40 +25,40 @@ use File::Copy;
=pod
-=head1 DESCRIPTION
+ =head1 DESCRIPTION
-=head2 copypath($from, $to, %params)
+ =head2 copypath($from, $to, %params)
-Recursively copy all files and directories from $from to $to.
-Does not preserve file metadata (e.g., permissions).
+ Recursively copy all files and directories from $from to $to.
+ Does not preserve file metadata (e.g., permissions).
-Only regular files and subdirectories are copied. Trying to copy other types
-of directory entries raises an exception.
+ Only regular files and subdirectories are copied. Trying to copy other types
+ of directory entries raises an exception.
-Raises an exception if a file would be overwritten, the source directory can't
-be read, or any I/O operation fails. However, we silently ignore ENOENT on
-open, because when copying from a live database it's possible for a file/dir
-to be deleted after we see its directory entry but before we can open it.
+ Raises an exception if a file would be overwritten, the source directory can't
+ be read, or any I/O operation fails. However, we silently ignore ENOENT on
+ open, because when copying from a live database it's possible for a file/dir
+ to be deleted after we see its directory entry but before we can open it.
-Always returns true.
+ Always returns true.
-If the B<filterfn> parameter is given, it must be a subroutine reference.
-This subroutine will be called for each entry in the source directory with its
-relative path as only parameter; if the subroutine returns true the entry is
-copied, otherwise the file is skipped.
+ If the B<filterfn> parameter is given, it must be a subroutine reference.
+ This subroutine will be called for each entry in the source directory with its
+ relative path as only parameter; if the subroutine returns true the entry is
+ copied, otherwise the file is skipped.
-On failure the target directory may be in some incomplete state; no cleanup is
-attempted.
+ On failure the target directory may be in some incomplete state; no cleanup is
+ attempted.
-=head1 EXAMPLES
+ =head1 EXAMPLES
- RecursiveCopy::copypath('/some/path', '/empty/dir',
- filterfn => sub {
- # omit log/ and contents
- my $src = shift;
- return $src ne 'log';
- }
- );
+ RecursiveCopy::copypath('/some/path', '/empty/dir',
+ filterfn => sub {
+ # omit log/ and contents
+ my $src = shift;
+return $src ne 'log';
+}
+);
=cut
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 92861c8..db263c9 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -364,7 +364,7 @@ sub check_pg_config
my ($regexp) = @_;
my ($stdout, $stderr);
my $result = IPC::Run::run [ 'pg_config', '--includedir' ], '>',
- \$stdout, '2>', \$stderr
+ \$stdout, '2>', \$stderr
or die "could not execute pg_config";
chomp($stdout);
@@ -411,9 +411,9 @@ sub command_exit_is
# long as the process was not terminated by an exception. To work around
# that, use $h->full_result on Windows instead.
my $result =
- ($Config{osname} eq "MSWin32")
- ? ($h->full_results)[0]
- : $h->result(0);
+ ($Config{osname} eq "MSWin32")
+ ? ($h->full_results)[0]
+ : $h->result(0);
is($result, $expected, $test_name);
return;
}
@@ -425,7 +425,7 @@ sub program_help_ok
my ($stdout, $stderr);
print("# Running: $cmd --help\n");
my $result = IPC::Run::run [ $cmd, '--help' ], '>', \$stdout, '2>',
- \$stderr;
+ \$stderr;
ok($result, "$cmd --help exit code 0");
isnt($stdout, '', "$cmd --help goes to stdout");
is($stderr, '', "$cmd --help nothing to stderr");
@@ -439,7 +439,7 @@ sub program_version_ok
my ($stdout, $stderr);
print("# Running: $cmd --version\n");
my $result = IPC::Run::run [ $cmd, '--version' ], '>', \$stdout, '2>',
- \$stderr;
+ \$stderr;
ok($result, "$cmd --version exit code 0");
isnt($stdout, '', "$cmd --version goes to stdout");
is($stderr, '', "$cmd --version nothing to stderr");
@@ -453,8 +453,8 @@ sub program_options_handling_ok
my ($stdout, $stderr);
print("# Running: $cmd --not-a-valid-option\n");
my $result = IPC::Run::run [ $cmd, '--not-a-valid-option' ], '>',
- \$stdout,
- '2>', \$stderr;
+ \$stdout,
+ '2>', \$stderr;
ok(!$result, "$cmd with invalid option nonzero exit code");
isnt($stderr, '', "$cmd with invalid option prints error message");
return;
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index beb4555..ef06bfb 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -92,7 +92,7 @@ sub test_target_session_attrs
# point does.
my ($ret, $stdout, $stderr) =
$node1->psql('postgres', 'SHOW port;',
- extra_params => [ '-d', $connstr ]);
+ extra_params => [ '-d', $connstr ]);
is( $status == $ret && $stdout eq $target_node->port,
1,
"connect to node $target_name if mode \"$mode\" and $node1_name,$node2_name listed"
diff --git a/src/test/recovery/t/006_logical_decoding.pl b/src/test/recovery/t/006_logical_decoding.pl
index c23cc4d..2a6d902 100644
--- a/src/test/recovery/t/006_logical_decoding.pl
+++ b/src/test/recovery/t/006_logical_decoding.pl
@@ -105,7 +105,7 @@ $node_master->safe_psql('otherdb',
);
# make sure you can't drop a slot while active
-SKIP:
+ SKIP:
{
# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.
diff --git a/src/test/recovery/t/015_promotion_pages.pl b/src/test/recovery/t/015_promotion_pages.pl
index 6fb70b5..73461b4 100644
--- a/src/test/recovery/t/015_promotion_pages.pl
+++ b/src/test/recovery/t/015_promotion_pages.pl
@@ -15,10 +15,10 @@ $alpha->init(allows_streaming => 1);
# references.
$alpha->append_conf("postgresql.conf", <<EOF);
wal_log_hints = off
-EOF
+ EOF
-# Start the primary
-$alpha->start;
+ # Start the primary
+ $alpha->start;
# setup/start a standby
$alpha->backup('bkp');
@@ -26,9 +26,9 @@ my $bravo = get_new_node('bravo');
$bravo->init_from_backup($alpha, 'bkp', has_streaming => 1);
$bravo->append_conf('postgresql.conf', <<EOF);
checkpoint_timeout=1h
-checkpoint_completion_target=0.9
-EOF
-$bravo->start;
+ checkpoint_completion_target=0.9
+ EOF
+ $bravo->start;
# Dummy table for the upcoming tests.
$alpha->safe_psql('postgres', 'create table test1 (a int)');
diff --git a/src/test/subscription/t/002_types.pl b/src/test/subscription/t/002_types.pl
index 02f76d4..1a80188 100644
--- a/src/test/subscription/t/002_types.pl
+++ b/src/test/subscription/t/002_types.pl
@@ -284,40 +284,40 @@ is( $result, '1|{1,2,3}
{4,1,2}|{d,a,b}|{4.4,1.1,2.2}|{"4 years","1 year","2 years"}
{5,NULL,NULL}|{e,NULL,b}|{5.5,1.1,NULL}|{"5 years",NULL,NULL}
1|a
-2|b
-3|c
-4|d
-5|
-a|{b,c}
+ 2|b
+ 3|c
+ 4|d
+ 5|
+ a|{b,c}
b|{c,a}
c|{b,a}
d|{c,b}
e|{d,NULL}
1|(1,a,1)
-2|(2,b,2)
-3|(3,c,3)
-4|(4,d,4)
-5|(,,5)
+ 2|(2,b,2)
+ 3|(3,c,3)
+ 4|(4,d,4)
+ 5|(,,5)
(1,a,1)|{"(1,a,1)"}
(2,b,2)|{"(2,b,2)"}
(3,c,3)|{"(3,c,3)"}
(4,d,4)|{"(4,d,3)"}
(5,e,)|{NULL,"(5,,5)"}
1|(1,a,1)
-2|(2,b,2)
-3|(3,c,3)
-4|(4,d,4)
-5|(,e,)
+ 2|(2,b,2)
+ 3|(3,c,3)
+ 4|(4,d,4)
+ 5|(,e,)
(1,a,1)|{"(1,a,1)"}
(2,b,2)|{"(2,b,2)"}
(3,c,3)|{"(3,c,3)"}
(4,d,3)|{"(3,d,3)"}
(5,e,3)|{"(3,e,3)",NULL}
1|(1,"{a,b,c}",1)
-2|(2,"{a,b,c}",2)
-3|(3,"{a,b,c}",3)
-4|(4,"{c,b,d}",4)
-5|(5,"{NULL,e,NULL}",5)
+ 2|(2,"{a,b,c}",2)
+ 3|(3,"{a,b,c}",3)
+ 4|(4,"{c,b,d}",4)
+ 5|(5,"{NULL,e,NULL}",5)
(1,"{a,b,c}",1)|{"(1,\"{a,b,c}\",1)"}
(2,"{b,c,a}",2)|{"(2,\"{b,c,a}\",1)"}
(3,"{c,a,b}",1)|{"(3,\"{c,a,b}\",1)"}
@@ -325,19 +325,19 @@ e|{d,NULL}
(5,"{c,NULL,b}",)|{"(5,\"{c,e,b}\",1)"}
("(1,a,1)","{""(1,a,1)"",""(2,b,2)""}",a,"{a,b,NULL,c}")|{"(\"(1,a,1)\",\"{\"\"(1,a,1)\"\",\"\"(2,b,2)\"\",NULL}\",a,\"{a,b,c}\")"}
1|[1,11)
-2|[2,21)
-3|[3,31)
-4|[4,41)
-5|[5,51)
-1|["2014-08-04 00:00:00+02",infinity)|{"[1,3)","[10,21)"}
+ 2|[2,21)
+ 3|[3,31)
+ 4|[4,41)
+ 5|[5,51)
+ 1|["2014-08-04 00:00:00+02",infinity)|{"[1,3)","[10,21)"}
2|["2014-08-02 00:00:00+02","2014-08-04 00:00:00+02")|{"[2,4)","[20,31)"}
3|["2014-08-01 00:00:00+02","2014-08-04 00:00:00+02")|{"[3,5)"}
4|["2014-07-31 00:00:00+02","2014-08-04 00:00:00+02")|{"[4,6)",NULL,"[40,51)"}
5||
-1|"a"=>"1"
-2|"zzz"=>"foo"
-3|"123"=>"321"
-4|"yellow horse"=>"moaned"',
+ 1|"a"=>"1"
+ 2|"zzz"=>"foo"
+ 3|"123"=>"321"
+ 4|"yellow horse"=>"moaned"',
'check replicated inserts on subscriber');
# Run batch of updates
@@ -405,40 +405,40 @@ is( $result, '1|{4,5,6}
{4,1,2}|{c,d,e}|{3,4,5}|{"3 days 00:00:01","4 days 00:00:02","5 days 00:00:03"}
{5,NULL,NULL}|{c,d,e}|{3,4,5}|{"3 days 00:00:01","4 days 00:00:02","5 days 00:00:03"}
1|c
-2|b
-3|c
-4|
-5|
-a|{e,NULL}
+ 2|b
+ 3|c
+ 4|
+ 5|
+ a|{e,NULL}
b|{c,a}
c|{b,a}
d|{e,d}
e|{e,d}
1|(1,A,1)
-2|(2,b,2)
-3|(3,c,3)
-4|(,x,-1)
-5|(,x,-1)
+ 2|(2,b,2)
+ 3|(3,c,3)
+ 4|(,x,-1)
+ 5|(,x,-1)
(1,a,1)|{"(9,x,-1)"}
(2,b,2)|{"(2,b,2)"}
(3,c,3)|{"(3,c,3)"}
(4,d,4)|{NULL,"(9,x,)"}
(5,e,)|{NULL,"(9,x,)"}
1|(1,,)
-2|(2,b,2)
-3|(3,c,3)
-4|(4,d,44)
-5|(4,d,44)
+ 2|(2,b,2)
+ 3|(3,c,3)
+ 4|(4,d,44)
+ 5|(4,d,44)
(1,a,1)|{NULL,"(3,d,3)"}
(2,b,2)|{"(2,b,2)"}
(3,c,3)|{"(3,c,3)"}
(4,d,3)|{"(1,a,1)","(2,b,2)"}
(5,e,3)|{"(1,a,1)","(2,b,2)"}
1|(1,"{a,e,c}",)
-2|(2,"{a,b,c}",2)
-3|(3,"{a,b,c}",3)
-4|(4,"{c,b,d}",4)
-5|(4,"{c,b,d}",4)
+ 2|(2,"{a,b,c}",2)
+ 3|(3,"{a,b,c}",3)
+ 4|(4,"{c,b,d}",4)
+ 5|(4,"{c,b,d}",4)
(1,"{a,b,c}",1)|{NULL,"(1,\"{a,b,c}\",1)","(,\"{a,e,c}\",2)"}
(2,"{b,c,a}",2)|{"(2,\"{b,c,a}\",1)"}
(3,"{c,a,b}",1)|{"(3,\"{c,a,b}\",1)"}
@@ -446,19 +446,19 @@ e|{e,d}
(5,"{c,NULL,b}",)|{"(5,\"{a,b,c}\",5)"}
("(1,a,1)","{""(1,a,1)"",""(2,b,2)""}",a,"{a,b,NULL,c}")|{"(\"(1,a,1)\",\"{\"\"(1,a,1)\"\",\"\"(2,b,2)\"\",NULL}\",a,\"{a,b,c}\")",NULL}
1|[100,1001)
-2|[2,21)
-3|[3,31)
-4|[2,90)
-5|[2,90)
-1|["2014-08-04 00:00:00+02",infinity)|{"[100,1001)"}
+ 2|[2,21)
+ 3|[3,31)
+ 4|[2,90)
+ 5|[2,90)
+ 1|["2014-08-04 00:00:00+02",infinity)|{"[100,1001)"}
2|["2014-08-02 00:00:00+02","2014-08-04 00:00:00+02")|{"[2,4)","[20,31)"}
3|["2014-08-01 00:00:00+02","2014-08-04 00:00:00+02")|{"[3,5)"}
4|["2014-08-04 00:00:00+02",infinity)|{NULL,"[11,10000000)"}
5|["2014-08-04 00:00:00+02",infinity)|{NULL,"[11,10000000)"}
1|"updated"=>"value"
-2|"updated"=>"value"
-3|"also"=>"updated"
-4|"yellow horse"=>"moaned"',
+ 2|"updated"=>"value"
+ 3|"also"=>"updated"
+ 4|"yellow horse"=>"moaned"',
'check replicated updates on subscriber');
# Run batch of deletes
@@ -521,33 +521,33 @@ is( $result, '3|{3,2,1}
{4,1,2}|{c,d,e}|{3,4,5}|{"3 days 00:00:01","4 days 00:00:02","5 days 00:00:03"}
{5,NULL,NULL}|{c,d,e}|{3,4,5}|{"3 days 00:00:01","4 days 00:00:02","5 days 00:00:03"}
3|c
-4|
-5|
-b|{c,a}
+ 4|
+ 5|
+ b|{c,a}
d|{e,d}
e|{e,d}
3|(3,c,3)
-4|(,x,-1)
-5|(,x,-1)
+ 4|(,x,-1)
+ 5|(,x,-1)
(2,b,2)|{"(2,b,2)"}
(4,d,4)|{NULL,"(9,x,)"}
(5,e,)|{NULL,"(9,x,)"}
3|(3,c,3)
-4|(4,d,44)
-5|(4,d,44)
+ 4|(4,d,44)
+ 5|(4,d,44)
(2,b,2)|{"(2,b,2)"}
(4,d,3)|{"(1,a,1)","(2,b,2)"}
(5,e,3)|{"(1,a,1)","(2,b,2)"}
4|(4,"{c,b,d}",4)
-5|(4,"{c,b,d}",4)
+ 5|(4,"{c,b,d}",4)
(2,"{b,c,a}",2)|{"(2,\"{b,c,a}\",1)"}
(4,"{c,b,d}",4)|{"(5,\"{a,b,c}\",5)"}
(5,"{c,NULL,b}",)|{"(5,\"{a,b,c}\",5)"}
2|["2014-08-02 00:00:00+02","2014-08-04 00:00:00+02")|{"[2,4)","[20,31)"}
3|["2014-08-01 00:00:00+02","2014-08-04 00:00:00+02")|{"[3,5)"}
2|"updated"=>"value"
-3|"also"=>"updated"
-4|"yellow horse"=>"moaned"',
+ 3|"also"=>"updated"
+ 4|"yellow horse"=>"moaned"',
'check replicated deletes on subscriber');
# Test a domain with a constraint backed by a SQL-language function,
diff --git a/src/test/subscription/t/008_diff_schema.pl b/src/test/subscription/t/008_diff_schema.pl
index 22b76f1..ca5109e 100644
--- a/src/test/subscription/t/008_diff_schema.pl
+++ b/src/test/subscription/t/008_diff_schema.pl
@@ -46,7 +46,7 @@ $node_subscriber->poll_query_until('postgres', $synced_query)
my $result =
$node_subscriber->safe_psql('postgres',
- "SELECT count(*), count(c), count(d = 999) FROM test_tab");
+ "SELECT count(*), count(c), count(d = 999) FROM test_tab");
is($result, qq(2|2|2), 'check initial data was copied to subscriber');
# Update the rows on the publisher and check the additional columns on
@@ -57,7 +57,7 @@ $node_publisher->wait_for_catchup($appname);
$result =
$node_subscriber->safe_psql('postgres',
- "SELECT count(*), count(c), count(d = 999), count(e) FROM test_tab");
+ "SELECT count(*), count(c), count(d = 999), count(e) FROM test_tab");
is($result, qq(2|2|2|2),
'check extra columns contain local defaults after copy');
@@ -85,7 +85,7 @@ $node_publisher->wait_for_catchup($appname);
$result =
$node_subscriber->safe_psql('postgres',
- "SELECT count(*), count(c), count(d = 999), count(e) FROM test_tab");
+ "SELECT count(*), count(c), count(d = 999), count(e) FROM test_tab");
is($result, qq(3|3|3|3),
'check extra columns contain local defaults after apply');
diff --git a/src/tools/fix-old-flex-code.pl b/src/tools/fix-old-flex-code.pl
index 0e0b572..47ac65f 100644
--- a/src/tools/fix-old-flex-code.pl
+++ b/src/tools/fix-old-flex-code.pl
@@ -39,10 +39,10 @@ exit 0 if $1 >= 36;
$ccode =~
s|(struct yyguts_t \* yyg = \(struct yyguts_t\*\)yyscanner; /\* This var may be unused depending upon options. \*/
.*?)
- return yy_is_jam \? 0 : yy_current_state;
+ return yy_is_jam \? 0 : yy_current_state;
|$1
- (void) yyg;
- return yy_is_jam ? 0 : yy_current_state;
+(void) yyg;
+return yy_is_jam ? 0 : yy_current_state;
|s;
# Write the modified file back out.
@@ -56,11 +56,11 @@ exit 0;
sub usage
{
die <<EOM;
-Usage: fix-old-flex-code.pl c-file-name
+ Usage: fix-old-flex-code.pl c-file-name
-fix-old-flex-code.pl modifies a flex output file to suppress
-an unused-variable warning that occurs with older flex versions.
+ fix-old-flex-code.pl modifies a flex output file to suppress
+ an unused-variable warning that occurs with older flex versions.
-Report bugs to <pgsql-bugs\@postgresql.org>.
-EOM
+ Report bugs to <pgsql-bugs\@postgresql.org>.
+ EOM
}
diff --git a/src/tools/git_changelog b/src/tools/git_changelog
index 0a714e2..2e61236 100755
--- a/src/tools/git_changelog
+++ b/src/tools/git_changelog
@@ -403,7 +403,7 @@ sub output_details
sub usage
{
print STDERR <<EOM;
-Usage: git_changelog [--brief/-b] [--details-after/-d] [--master-only/-m] [--non-master-only/-n] [--oldest-first/-o] [--post-date/-p] [--since=SINCE]
+ Usage: git_changelog [--brief/-b] [--details-after/-d] [--master-only/-m] [--non-master-only/-n] [--oldest-first/-o] [--post-date/-p] [--since=SINCE]
--brief Shorten commit descriptions, omitting branch identification
--details-after Show branch and author info after the commit description
--master-only Show only commits made just in the master branch
@@ -411,6 +411,6 @@ Usage: git_changelog [--brief/-b] [--details-after/-d] [--master-only/-m] [--non
--oldest-first Show oldest commits first
--post-date Show branches made after a commit occurred
--since Show only commits dated since SINCE
-EOM
+ EOM
exit 1;
}
diff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm
index dc128a7..3770924 100644
--- a/src/tools/msvc/Install.pm
+++ b/src/tools/msvc/Install.pm
@@ -256,7 +256,7 @@ sub CopySolutionOutput
next
if ($insttype eq "client" && !grep { $_ eq $pf }
- @client_program_files);
+ @client_program_files);
my $proj = read_file("$pf.$vcproj")
|| croak "Could not open $pf.$vcproj\n";
@@ -265,7 +265,7 @@ sub CopySolutionOutput
# SO_MAJOR_VERSION is defined in its Makefile, whose path
# can be found using the resource file of this project.
if (( $vcproj eq 'vcxproj'
- && $proj =~ qr{ResourceCompile\s*Include="([^"]+)"})
+ && $proj =~ qr{ResourceCompile\s*Include="([^"]+)"})
|| ( $vcproj eq 'vcproj'
&& $proj =~ qr{File\s*RelativePath="([^\"]+)\.rc"}))
{
@@ -405,7 +405,7 @@ sub GenerateTimezoneFiles
print "Generating timezone files...";
my @args =
- ("$conf/zic/zic", '-d', "$target/share/timezone", '-p', "$posixrules");
+ ("$conf/zic/zic", '-d', "$target/share/timezone", '-p', "$posixrules");
foreach (@tzfiles)
{
my $tzfile = $_;
@@ -639,7 +639,7 @@ sub ParseAndCleanRule
last if ($pcount < 0);
}
$flist =
- substr($flist, 0, index($flist, '$(addsuffix '))
+ substr($flist, 0, index($flist, '$(addsuffix '))
. substr($flist, $i + 1);
}
return $flist;
diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm
index 1492133..410962c 100644
--- a/src/tools/msvc/MSBuildProject.pm
+++ b/src/tools/msvc/MSBuildProject.pm
@@ -30,49 +30,49 @@ sub WriteHeader
my ($self, $f) = @_;
print $f <<EOF;
-<?xml version="1.0" encoding="Windows-1252"?>
-<Project DefaultTargets="Build" ToolsVersion="$self->{ToolsVersion}" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
- <ItemGroup Label="ProjectConfigurations">
-EOF
- $self->WriteConfigurationHeader($f, 'Debug');
+ <?xml version="1.0" encoding="Windows-1252"?>
+ <Project DefaultTargets="Build" ToolsVersion="$self->{ToolsVersion}" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
+ <ItemGroup Label="ProjectConfigurations">
+ EOF
+ $self->WriteConfigurationHeader($f, 'Debug');
$self->WriteConfigurationHeader($f, 'Release');
print $f <<EOF;
- </ItemGroup>
- <PropertyGroup Label="Globals">
- <ProjectGuid>$self->{guid}</ProjectGuid>
- </PropertyGroup>
- <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.Default.props" />
-EOF
- $self->WriteConfigurationPropertyGroup($f, 'Release',
- { wholeopt => 'false' });
+ </ItemGroup>
+ <PropertyGroup Label="Globals">
+ <ProjectGuid>$self->{guid}</ProjectGuid>
+ </PropertyGroup>
+ <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.Default.props" />
+ EOF
+ $self->WriteConfigurationPropertyGroup($f, 'Release',
+ { wholeopt => 'false' });
$self->WriteConfigurationPropertyGroup($f, 'Debug',
{ wholeopt => 'false' });
print $f <<EOF;
- <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.props" />
- <ImportGroup Label="ExtensionSettings">
- </ImportGroup>
-EOF
- $self->WritePropertySheetsPropertyGroup($f, 'Release');
+ <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.props" />
+ <ImportGroup Label="ExtensionSettings">
+ </ImportGroup>
+ EOF
+ $self->WritePropertySheetsPropertyGroup($f, 'Release');
$self->WritePropertySheetsPropertyGroup($f, 'Debug');
print $f <<EOF;
- <PropertyGroup Label="UserMacros" />
- <PropertyGroup>
- <_ProjectFileVersion>10.0.30319.1</_ProjectFileVersion>
-EOF
- $self->WriteAdditionalProperties($f, 'Debug');
+ <PropertyGroup Label="UserMacros" />
+ <PropertyGroup>
+ <_ProjectFileVersion>10.0.30319.1</_ProjectFileVersion>
+ EOF
+ $self->WriteAdditionalProperties($f, 'Debug');
$self->WriteAdditionalProperties($f, 'Release');
print $f <<EOF;
- </PropertyGroup>
-EOF
-
- $self->WriteItemDefinitionGroup(
- $f, 'Debug',
- {
- defs => "_DEBUG;DEBUG=1",
- opt => 'Disabled',
- strpool => 'false',
- runtime => 'MultiThreadedDebugDLL'
- });
+ </PropertyGroup>
+ EOF
+
+ $self->WriteItemDefinitionGroup(
+ $f, 'Debug',
+ {
+ defs => "_DEBUG;DEBUG=1",
+ opt => 'Disabled',
+ strpool => 'false',
+ runtime => 'MultiThreadedDebugDLL'
+ });
$self->WriteItemDefinitionGroup(
$f,
'Release',
@@ -102,19 +102,19 @@ sub WriteReferences
if (scalar(@references))
{
print $f <<EOF;
- <ItemGroup>
-EOF
- foreach my $ref (@references)
+ <ItemGroup>
+ EOF
+ foreach my $ref (@references)
{
print $f <<EOF;
- <ProjectReference Include="$ref->{name}$ref->{filenameExtension}">
- <Project>$ref->{guid}</Project>
- </ProjectReference>
-EOF
+ <ProjectReference Include="$ref->{name}$ref->{filenameExtension}">
+ <Project>$ref->{guid}</Project>
+ </ProjectReference>
+ EOF
}
print $f <<EOF;
- </ItemGroup>
-EOF
+ </ItemGroup>
+ EOF
}
return;
}
@@ -123,9 +123,9 @@ sub WriteFiles
{
my ($self, $f) = @_;
print $f <<EOF;
- <ItemGroup>
-EOF
- my @grammarFiles = ();
+ <ItemGroup>
+ EOF
+ my @grammarFiles = ();
my @resourceFiles = ();
my %uniquefiles;
foreach my $fileNameWithPath (sort keys %{ $self->{files} })
@@ -150,30 +150,30 @@ EOF
$obj =~ s!/!_!g;
print $f <<EOF;
- <ClCompile Include="$fileNameWithPath">
- <ObjectFileName Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">.\\debug\\$self->{name}\\${obj}_$fileName.obj</ObjectFileName>
- <ObjectFileName Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">.\\release\\$self->{name}\\${obj}_$fileName.obj</ObjectFileName>
- </ClCompile>
-EOF
+ <ClCompile Include="$fileNameWithPath">
+ <ObjectFileName Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">.\\debug\\$self->{name}\\${obj}_$fileName.obj</ObjectFileName>
+ <ObjectFileName Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">.\\release\\$self->{name}\\${obj}_$fileName.obj</ObjectFileName>
+ </ClCompile>
+ EOF
}
else
{
$uniquefiles{$fileName} = 1;
print $f <<EOF;
- <ClCompile Include="$fileNameWithPath" />
-EOF
+ <ClCompile Include="$fileNameWithPath" />
+ EOF
}
}
print $f <<EOF;
- </ItemGroup>
-EOF
- if (scalar(@grammarFiles))
+ </ItemGroup>
+ EOF
+ if (scalar(@grammarFiles))
{
print $f <<EOF;
- <ItemGroup>
-EOF
- foreach my $grammarFile (@grammarFiles)
+ <ItemGroup>
+ EOF
+ foreach my $grammarFile (@grammarFiles)
{
(my $outputFile = $grammarFile) =~ s/\.(y|l)$/.c/;
if ($grammarFile =~ /\.y$/)
@@ -181,52 +181,52 @@ EOF
$outputFile =~
s{^src\\pl\\plpgsql\\src\\gram.c$}{src\\pl\\plpgsql\\src\\pl_gram.c};
print $f <<EOF;
- <CustomBuild Include="$grammarFile">
- <Message Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">Running bison on $grammarFile</Message>
- <Command Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">perl "src\\tools\\msvc\\pgbison.pl" "$grammarFile"</Command>
- <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
- <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
- <Message Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">Running bison on $grammarFile</Message>
- <Command Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">perl "src\\tools\\msvc\\pgbison.pl" "$grammarFile"</Command>
- <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
- <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
- </CustomBuild>
-EOF
+ <CustomBuild Include="$grammarFile">
+ <Message Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">Running bison on $grammarFile</Message>
+ <Command Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">perl "src\\tools\\msvc\\pgbison.pl" "$grammarFile"</Command>
+ <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
+ <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
+ <Message Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">Running bison on $grammarFile</Message>
+ <Command Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">perl "src\\tools\\msvc\\pgbison.pl" "$grammarFile"</Command>
+ <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
+ <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
+ </CustomBuild>
+ EOF
}
else #if ($grammarFile =~ /\.l$/)
{
print $f <<EOF;
- <CustomBuild Include="$grammarFile">
- <Message Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">Running flex on $grammarFile</Message>
- <Command Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">perl "src\\tools\\msvc\\pgflex.pl" "$grammarFile"</Command>
- <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
- <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
- <Message Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">Running flex on $grammarFile</Message>
- <Command Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">perl "src\\tools\\msvc\\pgflex.pl" "$grammarFile"</Command>
- <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
- <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
- </CustomBuild>
-EOF
+ <CustomBuild Include="$grammarFile">
+ <Message Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">Running flex on $grammarFile</Message>
+ <Command Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">perl "src\\tools\\msvc\\pgflex.pl" "$grammarFile"</Command>
+ <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
+ <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Debug|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
+ <Message Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">Running flex on $grammarFile</Message>
+ <Command Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">perl "src\\tools\\msvc\\pgflex.pl" "$grammarFile"</Command>
+ <AdditionalInputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">%(AdditionalInputs)</AdditionalInputs>
+ <Outputs Condition="'\$(Configuration)|\$(Platform)'=='Release|$self->{platform}'">$outputFile;%(Outputs)</Outputs>
+ </CustomBuild>
+ EOF
}
}
print $f <<EOF;
- </ItemGroup>
-EOF
+ </ItemGroup>
+ EOF
}
if (scalar(@resourceFiles))
{
print $f <<EOF;
- <ItemGroup>
-EOF
- foreach my $rcFile (@resourceFiles)
+ <ItemGroup>
+ EOF
+ foreach my $rcFile (@resourceFiles)
{
print $f <<EOF;
- <ResourceCompile Include="$rcFile" />
-EOF
+ <ResourceCompile Include="$rcFile" />
+ EOF
}
print $f <<EOF;
- </ItemGroup>
-EOF
+ </ItemGroup>
+ EOF
}
return;
}
@@ -238,40 +238,40 @@ sub WriteConfigurationHeader
<ProjectConfiguration Include="$cfgname|$self->{platform}">
<Configuration>$cfgname</Configuration>
<Platform>$self->{platform}</Platform>
- </ProjectConfiguration>
-EOF
- return;
+ </ProjectConfiguration>
+ EOF
+ return;
}
sub WriteConfigurationPropertyGroup
{
my ($self, $f, $cfgname, $p) = @_;
my $cfgtype =
- ($self->{type} eq "exe")
- ? 'Application'
- : ($self->{type} eq "dll" ? 'DynamicLibrary' : 'StaticLibrary');
+ ($self->{type} eq "exe")
+ ? 'Application'
+ : ($self->{type} eq "dll" ? 'DynamicLibrary' : 'StaticLibrary');
print $f <<EOF;
- <PropertyGroup Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'" Label="Configuration">
- <ConfigurationType>$cfgtype</ConfigurationType>
- <UseOfMfc>false</UseOfMfc>
- <CharacterSet>MultiByte</CharacterSet>
- <WholeProgramOptimization>$p->{wholeopt}</WholeProgramOptimization>
- <PlatformToolset>$self->{PlatformToolset}</PlatformToolset>
- </PropertyGroup>
-EOF
- return;
+ <PropertyGroup Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'" Label="Configuration">
+ <ConfigurationType>$cfgtype</ConfigurationType>
+ <UseOfMfc>false</UseOfMfc>
+ <CharacterSet>MultiByte</CharacterSet>
+ <WholeProgramOptimization>$p->{wholeopt}</WholeProgramOptimization>
+ <PlatformToolset>$self->{PlatformToolset}</PlatformToolset>
+ </PropertyGroup>
+ EOF
+ return;
}
sub WritePropertySheetsPropertyGroup
{
my ($self, $f, $cfgname) = @_;
print $f <<EOF;
- <ImportGroup Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'" Label="PropertySheets">
- <Import Project="\$(UserRootDir)\\Microsoft.Cpp.\$(Platform).user.props" Condition="exists('\$(UserRootDir)\\Microsoft.Cpp.\$(Platform).user.props')" Label="LocalAppDataPlatform" />
- </ImportGroup>
-EOF
- return;
+ <ImportGroup Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'" Label="PropertySheets">
+ <Import Project="\$(UserRootDir)\\Microsoft.Cpp.\$(Platform).user.props" Condition="exists('\$(UserRootDir)\\Microsoft.Cpp.\$(Platform).user.props')" Label="LocalAppDataPlatform" />
+ </ImportGroup>
+ EOF
+ return;
}
sub WriteAdditionalProperties
@@ -279,19 +279,19 @@ sub WriteAdditionalProperties
my ($self, $f, $cfgname) = @_;
print $f <<EOF;
<OutDir Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">.\\$cfgname\\$self->{name}\\</OutDir>
- <IntDir Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">.\\$cfgname\\$self->{name}\\</IntDir>
- <LinkIncremental Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">false</LinkIncremental>
-EOF
- return;
+ <IntDir Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">.\\$cfgname\\$self->{name}\\</IntDir>
+ <LinkIncremental Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">false</LinkIncremental>
+ EOF
+ return;
}
sub WriteItemDefinitionGroup
{
my ($self, $f, $cfgname, $p) = @_;
my $cfgtype =
- ($self->{type} eq "exe")
- ? 'Application'
- : ($self->{type} eq "dll" ? 'DynamicLibrary' : 'StaticLibrary');
+ ($self->{type} eq "exe")
+ ? 'Application'
+ : ($self->{type} eq "dll" ? 'DynamicLibrary' : 'StaticLibrary');
my $libs = $self->GetAdditionalLinkerDependencies($cfgname, ';');
my $targetmachine =
@@ -303,8 +303,8 @@ sub WriteItemDefinitionGroup
$includes .= ';';
}
print $f <<EOF;
- <ItemDefinitionGroup Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">
- <ClCompile>
+ <ItemDefinitionGroup Condition="'\$(Configuration)|\$(Platform)'=='$cfgname|$self->{platform}'">
+ <ClCompile>
<Optimization>$p->{opt}</Optimization>
<AdditionalIncludeDirectories>$self->{prefixincludes}src/include;src/include/port/win32;src/include/port/win32_msvc;$includes\%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;_WINDOWS;__WINDOWS__;__WIN32__;EXEC_BACKEND;WIN32_STACK_RLIMIT=4194304;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_DEPRECATE$self->{defines}$p->{defs}\%(PreprocessorDefinitions)</PreprocessorDefinitions>
@@ -322,8 +322,8 @@ sub WriteItemDefinitionGroup
<SuppressStartupBanner>true</SuppressStartupBanner>
<DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
<CompileAs>Default</CompileAs>
- </ClCompile>
- <Link>
+ </ClCompile>
+ <Link>
<OutputFile>.\\$cfgname\\$self->{name}\\$self->{name}.$self->{type}</OutputFile>
<AdditionalDependencies>$libs;\%(AdditionalDependencies)</AdditionalDependencies>
<SuppressStartupBanner>true</SuppressStartupBanner>
@@ -339,8 +339,8 @@ sub WriteItemDefinitionGroup
<ImageHasSafeExceptionHandlers/>
<SubSystem>Console</SubSystem>
<TargetMachine>$targetmachine</TargetMachine>
-EOF
- if ($self->{disablelinkerwarnings})
+ EOF
+ if ($self->{disablelinkerwarnings})
{
print $f
" <AdditionalOptions>/ignore:$self->{disablelinkerwarnings} \%(AdditionalOptions)</AdditionalOptions>\n";
@@ -359,23 +359,23 @@ EOF
}
print $f <<EOF;
</Link>
- <ResourceCompile>
+ <ResourceCompile>
<AdditionalIncludeDirectories>src\\include;\%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
- </ResourceCompile>
-EOF
- if ($self->{builddef})
+ </ResourceCompile>
+ EOF
+ if ($self->{builddef})
{
print $f <<EOF;
- <PreLinkEvent>
- <Message>Generate DEF file</Message>
- <Command>perl src\\tools\\msvc\\gendef.pl $cfgname\\$self->{name} $self->{platform}</Command>
- </PreLinkEvent>
-EOF
+ <PreLinkEvent>
+ <Message>Generate DEF file</Message>
+ <Command>perl src\\tools\\msvc\\gendef.pl $cfgname\\$self->{name} $self->{platform}</Command>
+ </PreLinkEvent>
+ EOF
}
print $f <<EOF;
- </ItemDefinitionGroup>
-EOF
- return;
+ </ItemDefinitionGroup>
+ EOF
+ return;
}
sub Footer
@@ -384,12 +384,12 @@ sub Footer
$self->WriteReferences($f);
print $f <<EOF;
- <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.targets" />
- <ImportGroup Label="ExtensionTargets">
- </ImportGroup>
-</Project>
-EOF
- return;
+ <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.targets" />
+ <ImportGroup Label="ExtensionTargets">
+ </ImportGroup>
+ </Project>
+ EOF
+ return;
}
package VC2013Project;
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 2921d19..e040cd9 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -74,7 +74,7 @@ my $frontend_extraincludes = {
my $frontend_extrasource = {
'psql' => ['src/bin/psql/psqlscanslash.l'],
'pgbench' =>
- [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ]
+ [ 'src/bin/pgbench/exprscan.l', 'src/bin/pgbench/exprparse.y' ]
};
my @frontend_excludes = (
'pgevent', 'pg_basebackup', 'pg_rewind', 'pg_dump',
@@ -253,7 +253,7 @@ sub mkvcbuild
my $libpqwalreceiver =
$solution->AddProject('libpqwalreceiver', 'dll', '',
- 'src/backend/replication/libpqwalreceiver');
+ 'src/backend/replication/libpqwalreceiver');
$libpqwalreceiver->AddIncludeDir('src/interfaces/libpq');
$libpqwalreceiver->AddReference($postgres, $libpq);
@@ -388,7 +388,7 @@ sub mkvcbuild
# So remove their sources from the object, keeping the other setup that
# AddSimpleFrontend() has done.
my @nodumpall = grep { m!src/bin/pg_dump/.*\.c$! }
- keys %{ $pgdumpall->{files} };
+ keys %{ $pgdumpall->{files} };
delete @{ $pgdumpall->{files} }{@nodumpall};
$pgdumpall->{name} = 'pg_dumpall';
$pgdumpall->AddIncludeDir('src/backend');
@@ -534,7 +534,7 @@ sub mkvcbuild
# In this case, prefer .lib.
my @perl_libs =
reverse sort grep { /perl\d+\.lib$|libperl\d+\.a$/ }
- glob($perl_path);
+ glob($perl_path);
if (@perl_libs > 0)
{
$plperl->AddLibrary($perl_libs[0]);
@@ -675,7 +675,7 @@ sub mkvcbuild
print "CFLAGS recommended by Perl: $Config{ccflags}\n";
print "CFLAGS to compile embedded Perl: ",
- (join ' ', map { "-D$_" } @perl_embed_ccflags), "\n";
+ (join ' ', map { "-D$_" } @perl_embed_ccflags), "\n";
foreach my $f (@perl_embed_ccflags)
{
$plperl->AddDefine($f);
@@ -689,12 +689,12 @@ sub mkvcbuild
my $xsubppdir = first { -e "$_/ExtUtils/xsubpp" } (@INC);
print "Building $plperlsrc$xsc...\n";
system( $solution->{options}->{perl}
- . '/bin/perl '
- . "$xsubppdir/ExtUtils/xsubpp -typemap "
- . $solution->{options}->{perl}
- . '/lib/ExtUtils/typemap '
- . "$plperlsrc$xs "
- . ">$plperlsrc$xsc");
+ . '/bin/perl '
+ . "$xsubppdir/ExtUtils/xsubpp -typemap "
+ . $solution->{options}->{perl}
+ . '/lib/ExtUtils/typemap '
+ . "$plperlsrc$xs "
+ . ">$plperlsrc$xsc");
if ((!(-f "$plperlsrc$xsc")) || -z "$plperlsrc$xsc")
{
unlink("$plperlsrc$xsc"); # if zero size
@@ -713,11 +713,11 @@ sub mkvcbuild
my $basedir = getcwd;
chdir 'src/pl/plperl';
system( $solution->{options}->{perl}
- . '/bin/perl '
- . 'text2macro.pl '
- . '--strip="^(\#.*|\s*)$$" '
- . 'plc_perlboot.pl plc_trusted.pl '
- . '>perlchunks.h');
+ . '/bin/perl '
+ . 'text2macro.pl '
+ . '--strip="^(\#.*|\s*)$$" '
+ . 'plc_perlboot.pl plc_trusted.pl '
+ . '>perlchunks.h');
chdir $basedir;
if ((!(-f 'src/pl/plperl/perlchunks.h'))
|| -z 'src/pl/plperl/perlchunks.h')
@@ -734,9 +734,9 @@ sub mkvcbuild
my $basedir = getcwd;
chdir 'src/pl/plperl';
system( $solution->{options}->{perl}
- . '/bin/perl '
- . 'plperl_opmask.pl '
- . 'plperl_opmask.h');
+ . '/bin/perl '
+ . 'plperl_opmask.pl '
+ . 'plperl_opmask.h');
chdir $basedir;
if ((!(-f 'src/pl/plperl/plperl_opmask.h'))
|| -z 'src/pl/plperl/plperl_opmask.h')
diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index 80fa459..abb74e1 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -47,14 +47,14 @@ sub _new
unless grep { $_ == $options->{blocksize} } (1, 2, 4, 8, 16, 32);
$options->{segsize} = 1
unless $options->{segsize}; # undef or 0 means default
- # only allow segsize 1 for now, as we can't do large files yet in windows
+ # only allow segsize 1 for now, as we can't do large files yet in windows
die "Bad segsize $options->{segsize}"
unless $options->{segsize} == 1;
$options->{wal_blocksize} = 8
unless $options->{wal_blocksize}; # undef or 0 means default
die "Bad wal_blocksize $options->{wal_blocksize}"
unless grep { $_ == $options->{wal_blocksize} }
- (1, 2, 4, 8, 16, 32, 64);
+ (1, 2, 4, 8, 16, 32, 64);
$options->{wal_segsize} = 16
unless $options->{wal_segsize}; # undef or 0 means default
die "Bad wal_segsize $options->{wal_segsize}"
@@ -178,12 +178,12 @@ sub GenerateFiles
print $o "#define ENABLE_NLS 1\n" if ($self->{options}->{nls});
print $o "#define BLCKSZ ", 1024 * $self->{options}->{blocksize},
- "\n";
+ "\n";
print $o "#define RELSEG_SIZE ",
- (1024 / $self->{options}->{blocksize}) *
+ (1024 / $self->{options}->{blocksize}) *
$self->{options}->{segsize} * 1024, "\n";
print $o "#define XLOG_BLCKSZ ",
- 1024 * $self->{options}->{wal_blocksize}, "\n";
+ 1024 * $self->{options}->{wal_blocksize}, "\n";
if ($self->{options}->{float4byval})
{
@@ -429,13 +429,13 @@ sub GenerateFiles
open(my $o, '>', 'src/interfaces/ecpg/include/ecpg_config.h')
|| confess "Could not open ecpg_config.h";
print $o <<EOF;
-#if (_MSC_VER > 1200)
-#define HAVE_LONG_LONG_INT 1
-#define HAVE_LONG_LONG_INT_64 1
-#endif
-#define ENABLE_THREAD_SAFETY 1
-EOF
- close($o);
+ #if (_MSC_VER > 1200)
+ #define HAVE_LONG_LONG_INT 1
+ #define HAVE_LONG_LONG_INT_64 1
+ #endif
+ #define ENABLE_THREAD_SAFETY 1
+ EOF
+ close($o);
}
unless (-f "src/port/pg_config_paths.h")
@@ -444,20 +444,20 @@ EOF
open(my $o, '>', 'src/port/pg_config_paths.h')
|| confess "Could not open pg_config_paths.h";
print $o <<EOF;
-#define PGBINDIR "/bin"
-#define PGSHAREDIR "/share"
-#define SYSCONFDIR "/etc"
-#define INCLUDEDIR "/include"
-#define PKGINCLUDEDIR "/include"
-#define INCLUDEDIRSERVER "/include/server"
-#define LIBDIR "/lib"
-#define PKGLIBDIR "/lib"
-#define LOCALEDIR "/share/locale"
-#define DOCDIR "/doc"
-#define HTMLDIR "/doc"
-#define MANDIR "/man"
-EOF
- close($o);
+ #define PGBINDIR "/bin"
+ #define PGSHAREDIR "/share"
+ #define SYSCONFDIR "/etc"
+ #define INCLUDEDIR "/include"
+ #define PKGINCLUDEDIR "/include"
+ #define INCLUDEDIRSERVER "/include/server"
+ #define LIBDIR "/lib"
+ #define PKGLIBDIR "/lib"
+ #define LOCALEDIR "/share/locale"
+ #define DOCDIR "/doc"
+ #define HTMLDIR "/doc"
+ #define MANDIR "/man"
+ EOF
+ close($o);
}
my $mf = Project::read_file('src/backend/catalog/Makefile');
@@ -485,10 +485,10 @@ EOF
}
$need_genbki = 1
if IsNewer('src/backend/catalog/bki-stamp',
- 'src/backend/catalog/genbki.pl');
+ 'src/backend/catalog/genbki.pl');
$need_genbki = 1
if IsNewer('src/backend/catalog/bki-stamp',
- 'src/backend/catalog/Catalog.pm');
+ 'src/backend/catalog/Catalog.pm');
if ($need_genbki)
{
chdir('src/backend/catalog');
@@ -528,10 +528,10 @@ EOF
open(my $o, '>', "doc/src/sgml/version.sgml")
|| croak "Could not write to version.sgml\n";
print $o <<EOF;
-<!ENTITY version "$self->{strver}">
-<!ENTITY majorversion "$self->{majorver}">
-EOF
- close($o);
+ <!ENTITY version "$self->{strver}">
+ <!ENTITY majorversion "$self->{majorver}">
+ EOF
+ close($o);
return;
}
@@ -659,62 +659,62 @@ sub Save
open(my $sln, '>', "pgsql.sln") || croak "Could not write to pgsql.sln\n";
print $sln <<EOF;
-Microsoft Visual Studio Solution File, Format Version $self->{solutionFileVersion}
-# $self->{visualStudioName}
-EOF
+ Microsoft Visual Studio Solution File, Format Version $self->{solutionFileVersion}
+ # $self->{visualStudioName}
+ EOF
- print $sln $self->GetAdditionalHeaders();
+ print $sln $self->GetAdditionalHeaders();
foreach my $fld (keys %{ $self->{projects} })
{
foreach my $proj (@{ $self->{projects}->{$fld} })
{
print $sln <<EOF;
-Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "$proj->{name}", "$proj->{name}$proj->{filenameExtension}", "$proj->{guid}"
-EndProject
-EOF
+ Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "$proj->{name}", "$proj->{name}$proj->{filenameExtension}", "$proj->{guid}"
+ EndProject
+ EOF
}
if ($fld ne "")
{
$flduid{$fld} = Win32::GuidGen();
print $sln <<EOF;
-Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "$fld", "$fld", "$flduid{$fld}"
-EndProject
-EOF
+ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "$fld", "$fld", "$flduid{$fld}"
+ EndProject
+ EOF
}
}
print $sln <<EOF;
-Global
- GlobalSection(SolutionConfigurationPlatforms) = preSolution
- Debug|$self->{platform}= Debug|$self->{platform}
- Release|$self->{platform} = Release|$self->{platform}
+ Global
+ GlobalSection(SolutionConfigurationPlatforms) = preSolution
+ Debug|$self->{platform}= Debug|$self->{platform}
+ Release|$self->{platform} = Release|$self->{platform}
EndGlobalSection
- GlobalSection(ProjectConfigurationPlatforms) = postSolution
-EOF
+ GlobalSection(ProjectConfigurationPlatforms) = postSolution
+ EOF
- foreach my $fld (keys %{ $self->{projects} })
+ foreach my $fld (keys %{ $self->{projects} })
{
foreach my $proj (@{ $self->{projects}->{$fld} })
{
print $sln <<EOF;
- $proj->{guid}.Debug|$self->{platform}.ActiveCfg = Debug|$self->{platform}
- $proj->{guid}.Debug|$self->{platform}.Build.0 = Debug|$self->{platform}
- $proj->{guid}.Release|$self->{platform}.ActiveCfg = Release|$self->{platform}
- $proj->{guid}.Release|$self->{platform}.Build.0 = Release|$self->{platform}
-EOF
+ $proj->{guid}.Debug|$self->{platform}.ActiveCfg = Debug|$self->{platform}
+ $proj->{guid}.Debug|$self->{platform}.Build.0 = Debug|$self->{platform}
+ $proj->{guid}.Release|$self->{platform}.ActiveCfg = Release|$self->{platform}
+ $proj->{guid}.Release|$self->{platform}.Build.0 = Release|$self->{platform}
+ EOF
}
}
print $sln <<EOF;
EndGlobalSection
- GlobalSection(SolutionProperties) = preSolution
- HideSolutionNode = FALSE
- EndGlobalSection
- GlobalSection(NestedProjects) = preSolution
-EOF
+ GlobalSection(SolutionProperties) = preSolution
+ HideSolutionNode = FALSE
+ EndGlobalSection
+ GlobalSection(NestedProjects) = preSolution
+ EOF
- foreach my $fld (keys %{ $self->{projects} })
+ foreach my $fld (keys %{ $self->{projects} })
{
next if ($fld eq "");
foreach my $proj (@{ $self->{projects}->{$fld} })
@@ -725,9 +725,9 @@ EOF
print $sln <<EOF;
EndGlobalSection
-EndGlobal
-EOF
- close($sln);
+ EndGlobal
+ EOF
+ close($sln);
return;
}
diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
index d7a9fc5..dce5efe 100644
--- a/src/tools/msvc/config_default.pl
+++ b/src/tools/msvc/config_default.pl
@@ -4,7 +4,7 @@ use warnings;
our $config = {
asserts => 0, # --enable-cassert
- # float4byval=>1, # --disable-float4-byval, on by default
+ # float4byval=>1, # --disable-float4-byval, on by default
# float8byval=> $platformbits == 64, # --disable-float8-byval,
# off by default on 32 bit platforms, on by default on 64 bit platforms
diff --git a/src/tools/msvc/gendef.pl b/src/tools/msvc/gendef.pl
index 77c3a77..84ce103 100644
--- a/src/tools/msvc/gendef.pl
+++ b/src/tools/msvc/gendef.pl
@@ -152,14 +152,14 @@ sub writedef
sub usage
{
die( "Usage: gendef.pl <modulepath> <platform>\n"
- . " modulepath: path to dir with obj files, no trailing slash"
- . " platform: Win32 | x64");
+ . " modulepath: path to dir with obj files, no trailing slash"
+ . " platform: Win32 | x64");
}
usage()
unless scalar(@ARGV) == 2
&& ( ($ARGV[0] =~ /\\([^\\]+$)/)
- && ($ARGV[1] eq 'Win32' || $ARGV[1] eq 'x64'));
+ && ($ARGV[1] eq 'Win32' || $ARGV[1] eq 'x64'));
my $defname = uc $1;
my $deffile = "$ARGV[0]/$defname.def";
my $platform = $ARGV[1];
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index 2e53c73..6954945 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -38,7 +38,7 @@ if (-e "src/tools/msvc/buildenv.pl")
my $what = shift || "";
if ($what =~
/^(check|installcheck|plcheck|contribcheck|modulescheck|ecpgcheck|isolationcheck|upgradecheck|bincheck|recoverycheck|taptest)$/i
- )
+)
{
$what = uc $what;
}
@@ -291,8 +291,8 @@ sub mangle_plpython3
s/LANGUAGE plpython2?u/LANGUAGE plpython3u/g;
s/EXTENSION ([^ ]*_)*plpython2?u/EXTENSION $1plpython3u/g;
s/installing required extension "plpython2u"/installing required extension "plpython3u"/g;
- }
- for ($contents);
+ }
+ for ($contents);
my $base = basename $file;
open($handle, '>', "$dir/python3/$base")
|| die "opening python 3 file for $file";
@@ -302,7 +302,7 @@ sub mangle_plpython3
}
}
do { s!^!python3/!; }
- foreach (@$tests);
+ foreach (@$tests);
return @$tests;
}
@@ -553,7 +553,7 @@ sub upgradecheck
# Install does a chdir, so change back after that
chdir $cwd;
my ($bindir, $libdir, $oldsrc, $newsrc) =
- ("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
+ ("$upg_tmp_install/bin", "$upg_tmp_install/lib", $topdir, $topdir);
$ENV{PATH} = "$bindir;$ENV{PATH}";
my $data = "$tmp_root/data";
$ENV{PGDATA} = "$data.old";
@@ -636,8 +636,8 @@ sub fetchRegressOpts
# an unhandled variable reference. Ignore anything that isn't an
# option starting with "--".
@opts = grep { !/\$\(/ && /^--/ }
- map { (my $x = $_) =~ s/\Q$(top_builddir)\E/\"$topdir\"/; $x; }
- split(/\s+/, $1);
+ map { (my $x = $_) =~ s/\Q$(top_builddir)\E/\"$topdir\"/; $x; }
+ split(/\s+/, $1);
}
if ($m =~ /^\s*ENCODING\s*=\s*(\S+)/m)
{
@@ -686,11 +686,11 @@ sub fetchTests
my $cftests =
$config->{openssl}
- ? GetTests("OSSL_TESTS", $m)
+ ? GetTests("OSSL_TESTS", $m)
: GetTests("INT_TESTS", $m);
my $pgptests =
$config->{zlib}
- ? GetTests("ZLIB_TST", $m)
+ ? GetTests("ZLIB_TST", $m)
: GetTests("ZLIB_OFF_TST", $m);
$t =~ s/\$\(CF_TESTS\)/$cftests/;
$t =~ s/\$\(CF_PGP_TESTS\)/$pgptests/;
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index 2d81672..7d4f068 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -317,9 +317,9 @@ sub diff
$post_fh->close();
system( "diff $flags "
- . $pre_fh->filename . " "
- . $post_fh->filename
- . " >&2");
+ . $pre_fh->filename . " "
+ . $post_fh->filename
+ . " >&2");
return;
}
diff --git a/src/tools/version_stamp.pl b/src/tools/version_stamp.pl
index 41f5d76..dc33c73 100755
--- a/src/tools/version_stamp.pl
+++ b/src/tools/version_stamp.pl
@@ -84,7 +84,7 @@ open(my $fh, '<', "configure.in") || die "could not read configure.in: $!\n";
while (<$fh>)
{
if (m/^m4_if\(m4_defn\(\[m4_PACKAGE_VERSION\]\), \[(.*)\], \[\], \[m4_fatal/
- )
+ )
{
$aconfver = $1;
last;
@@ -108,21 +108,21 @@ sed_file("doc/bug.template",
sed_file("src/include/pg_config.h.win32",
"-e 's/#define PACKAGE_STRING \"PostgreSQL .*\"/#define PACKAGE_STRING \"PostgreSQL $fullversion\"/' "
- . "-e 's/#define PACKAGE_VERSION \".*\"/#define PACKAGE_VERSION \"$fullversion\"/' "
- . "-e 's/#define PG_VERSION \".*\"/#define PG_VERSION \"$fullversion\"/' "
- . "-e 's/#define PG_VERSION_NUM .*/#define PG_VERSION_NUM $padnumericversion/'"
+ . "-e 's/#define PACKAGE_VERSION \".*\"/#define PACKAGE_VERSION \"$fullversion\"/' "
+ . "-e 's/#define PG_VERSION \".*\"/#define PG_VERSION \"$fullversion\"/' "
+ . "-e 's/#define PG_VERSION_NUM .*/#define PG_VERSION_NUM $padnumericversion/'"
);
sed_file("src/interfaces/libpq/libpq.rc.in",
"-e 's/FILEVERSION [0-9]*,[0-9]*,[0-9]*,0/FILEVERSION $majorversion,0,$numericminor,0/' "
- . "-e 's/PRODUCTVERSION [0-9]*,[0-9]*,[0-9]*,0/PRODUCTVERSION $majorversion,0,$numericminor,0/' "
- . "-e 's/VALUE \"FileVersion\", \"[0-9.]*/VALUE \"FileVersion\", \"$numericversion/' "
- . "-e 's/VALUE \"ProductVersion\", \"[0-9.]*/VALUE \"ProductVersion\", \"$numericversion/'"
+ . "-e 's/PRODUCTVERSION [0-9]*,[0-9]*,[0-9]*,0/PRODUCTVERSION $majorversion,0,$numericminor,0/' "
+ . "-e 's/VALUE \"FileVersion\", \"[0-9.]*/VALUE \"FileVersion\", \"$numericversion/' "
+ . "-e 's/VALUE \"ProductVersion\", \"[0-9.]*/VALUE \"ProductVersion\", \"$numericversion/'"
);
sed_file("src/port/win32ver.rc",
"-e 's/FILEVERSION [0-9]*,[0-9]*,[0-9]*,0/FILEVERSION $majorversion,0,$numericminor,0/' "
- . "-e 's/PRODUCTVERSION [0-9]*,[0-9]*,[0-9]*,0/PRODUCTVERSION $majorversion,0,$numericminor,0/'"
+ . "-e 's/PRODUCTVERSION [0-9]*,[0-9]*,[0-9]*,0/PRODUCTVERSION $majorversion,0,$numericminor,0/'"
);
print "Stamped these files with version number $fullversion:\n$fixedfiles";
diff --git a/src/tools/win32tzlist.pl b/src/tools/win32tzlist.pl
index 0fb561b..d5af5b3 100755
--- a/src/tools/win32tzlist.pl
+++ b/src/tools/win32tzlist.pl
@@ -47,11 +47,11 @@ foreach my $keyname (@subkeys)
die "Incomplete timezone data for $keyname!\n"
unless ($vals{Std} && $vals{Dlt} && $vals{Display});
push @system_zones,
- {
+ {
'std' => $vals{Std}->[2],
'dlt' => $vals{Dlt}->[2],
'display' => clean_displayname($vals{Display}->[2]),
- };
+ };
}
$basekey->Close();
@@ -77,12 +77,12 @@ while ($pgtz =~
m/{\s+"([^"]+)",\s+"([^"]+)",\s+"([^"]+)",?\s+},\s+\/\*(.+?)\*\//gs)
{
push @file_zones,
- {
+ {
'std' => $1,
'dlt' => $2,
'match' => $3,
'display' => clean_displayname($4),
- };
+ };
}
#
On 1/8/19 8:44 PM, Noah Misch wrote:
On Tue, Jan 08, 2019 at 08:17:43AM -0500, Andrew Dunstan wrote:
On 1/3/19 12:53 AM, Noah Misch wrote:
If I run perltidy on 60d9979, then run perl-mode indent, the diff between the
perltidy run and perl-mode indent run is:
129 files changed, 8468 insertions(+), 8468 deletions(-)
If I add (perl-continued-brace-offset . -2):
119 files changed, 3515 insertions(+), 3515 deletions(-)
If I add (perl-indent-continued-arguments . 4) as well:
86 files changed, 2626 insertions(+), 2626 deletions(-)
If I add (perl-indent-parens-as-block . t) as well:
65 files changed, 2373 insertions(+), 2373 deletions(-)Sounds good. What do the remaining diffs look like?
I've attached them. Most involve statement continuation in some form. For
example, src/backend/utils/mb/Unicode has numerous instances where perl-mode
indents hashref-constructor curly braces as though they were code blocks.
Other diff lines involve labels. Others are in string literals.
On a very quick glance I notice some things that looked just wrong, and
some that were at best dubious. It's a pity that we can't get closer to
what perltidy does, but +1 for applying your changes.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services