language cleanups in code and docs
Hi,
We've removed the use of "slave" from most of the repo (one use
remained, included here), but we didn't do the same for master. In the
attached series I replaced most of the uses.
0001: tap tests: s/master/primary/
Pretty clear cut imo.
0002: code: s/master/primary/
This also includes a few minor other changes (s/in master/on the
primary/, a few 'the's added). Perhaps it'd be better to do those
separately?
0003: code: s/master/leader/
This feels pretty obvious. We've largely used the leader / worker
terminology, but there were a few uses of master left.
0004: code: s/master/$other/
This is most of the remaining uses of master in code. A number of
references to 'master' in the context of toast, a few uses of 'master
copy'. I guess some of these are a bit less clear cut.
0005: docs: s/master/primary/
These seem mostly pretty straightforward to me. The changes in
high-availability.sgml probably deserve the most attention.
0006: docs: s/master/root/
Here using root seems a lot better than master anyway (master seems
confusing in regard to inheritance scenarios). But perhaps parent
would be better? Went with root since it's about the topmost table.
0007: docs: s/master/supervisor/
I guess this could be a bit more contentious. Supervisor seems clearer
to me, but I can see why people would disagree. See also later point
about changes I have not done at this stage.
0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.
After this series there are only two widespread use of 'master' in the
tree.
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.
Greetings,
Andres Freund
Attachments:
v1-0001-tap-tests-s-master-primary.patchtext/x-diff; charset=us-asciiDownload
From 8ac04b91f007b3dd6e5a3fd39216d54a01ce7ab4 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sun, 14 Jun 2020 11:47:37 -0700
Subject: [PATCH v1 1/8] tap tests: s/master/primary/
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/bin/pg_rewind/t/001_basic.pl | 76 +++---
src/bin/pg_rewind/t/002_databases.pl | 24 +-
src/bin/pg_rewind/t/003_extrafiles.pl | 54 ++---
src/bin/pg_rewind/t/004_pg_xlog_symlink.pl | 40 ++--
src/bin/pg_rewind/t/005_same_timeline.pl | 2 +-
src/bin/pg_rewind/t/RewindTest.pm | 134 +++++------
src/bin/pg_verifybackup/t/002_algorithm.pl | 14 +-
src/bin/pg_verifybackup/t/003_corruption.pl | 12 +-
src/bin/pg_verifybackup/t/004_options.pl | 10 +-
src/bin/pg_verifybackup/t/006_encoding.pl | 10 +-
src/bin/pg_verifybackup/t/007_wal.pl | 12 +-
contrib/bloom/t/001_wal.pl | 40 ++--
src/test/authentication/t/001_password.pl | 4 +-
src/test/authentication/t/002_saslprep.pl | 4 +-
src/test/modules/commit_ts/t/002_standby.pl | 40 ++--
src/test/modules/commit_ts/t/003_standby_2.pl | 36 +--
src/test/modules/commit_ts/t/004_restart.pl | 64 ++---
.../test_misc/t/001_constraint_validation.pl | 2 +-
src/test/perl/PostgresNode.pm | 14 +-
src/test/perl/README | 2 +-
src/test/recovery/t/001_stream_rep.pl | 122 +++++-----
src/test/recovery/t/002_archiving.pl | 26 +-
src/test/recovery/t/003_recovery_targets.pl | 60 ++---
src/test/recovery/t/004_timeline_switch.pl | 26 +-
src/test/recovery/t/005_replay_delay.pl | 28 +--
src/test/recovery/t/006_logical_decoding.pl | 78 +++---
src/test/recovery/t/007_sync_rep.pl | 70 +++---
src/test/recovery/t/008_fsm_truncation.pl | 30 +--
src/test/recovery/t/009_twophase.pl | 222 +++++++++---------
.../t/010_logical_decoding_timelines.pl | 64 ++---
src/test/recovery/t/011_crash_recovery.pl | 2 +-
src/test/recovery/t/012_subtransactions.pl | 84 +++----
src/test/recovery/t/013_crash_restart.pl | 2 +-
src/test/recovery/t/019_replslot_limit.pl | 132 +++++------
src/test/recovery/t/020_archive_status.pl | 2 +-
src/test/ssl/t/001_ssltests.pl | 2 +-
src/test/ssl/t/002_scram.pl | 2 +-
37 files changed, 773 insertions(+), 773 deletions(-)
diff --git a/src/bin/pg_rewind/t/001_basic.pl b/src/bin/pg_rewind/t/001_basic.pl
index d97e4377419..fb4a0acd965 100644
--- a/src/bin/pg_rewind/t/001_basic.pl
+++ b/src/bin/pg_rewind/t/001_basic.pl
@@ -13,58 +13,58 @@ sub run_test
my $test_mode = shift;
RewindTest::setup_cluster($test_mode);
- RewindTest::start_master();
+ RewindTest::start_primary();
- # Create a test table and insert a row in master.
- master_psql("CREATE TABLE tbl1 (d text)");
- master_psql("INSERT INTO tbl1 VALUES ('in master')");
+ # Create a test table and insert a row in primary.
+ primary_psql("CREATE TABLE tbl1 (d text)");
+ primary_psql("INSERT INTO tbl1 VALUES ('in primary')");
# This test table will be used to test truncation, i.e. the table
- # is extended in the old master after promotion
- master_psql("CREATE TABLE trunc_tbl (d text)");
- master_psql("INSERT INTO trunc_tbl VALUES ('in master')");
+ # is extended in the old primary after promotion
+ primary_psql("CREATE TABLE trunc_tbl (d text)");
+ primary_psql("INSERT INTO trunc_tbl VALUES ('in primary')");
# This test table will be used to test the "copy-tail" case, i.e. the
- # table is truncated in the old master after promotion
- master_psql("CREATE TABLE tail_tbl (id integer, d text)");
- master_psql("INSERT INTO tail_tbl VALUES (0, 'in master')");
+ # table is truncated in the old primary after promotion
+ primary_psql("CREATE TABLE tail_tbl (id integer, d text)");
+ primary_psql("INSERT INTO tail_tbl VALUES (0, 'in primary')");
- master_psql("CHECKPOINT");
+ primary_psql("CHECKPOINT");
RewindTest::create_standby($test_mode);
- # Insert additional data on master that will be replicated to standby
- master_psql("INSERT INTO tbl1 values ('in master, before promotion')");
- master_psql(
- "INSERT INTO trunc_tbl values ('in master, before promotion')");
- master_psql(
- "INSERT INTO tail_tbl SELECT g, 'in master, before promotion: ' || g FROM generate_series(1, 10000) g"
+ # Insert additional data on primary that will be replicated to standby
+ primary_psql("INSERT INTO tbl1 values ('in primary, before promotion')");
+ primary_psql(
+ "INSERT INTO trunc_tbl values ('in primary, before promotion')");
+ primary_psql(
+ "INSERT INTO tail_tbl SELECT g, 'in primary, before promotion: ' || g FROM generate_series(1, 10000) g"
);
- master_psql('CHECKPOINT');
+ primary_psql('CHECKPOINT');
RewindTest::promote_standby();
- # Insert a row in the old master. This causes the master and standby
+ # Insert a row in the old primary. This causes the primary and standby
# to have "diverged", it's no longer possible to just apply the
- # standy's logs over master directory - you need to rewind.
- master_psql("INSERT INTO tbl1 VALUES ('in master, after promotion')");
+ # standy's logs over primary directory - you need to rewind.
+ primary_psql("INSERT INTO tbl1 VALUES ('in primary, after promotion')");
# Also insert a new row in the standby, which won't be present in the
- # old master.
+ # old primary.
standby_psql("INSERT INTO tbl1 VALUES ('in standby, after promotion')");
# Insert enough rows to trunc_tbl to extend the file. pg_rewind should
# truncate it back to the old size.
- master_psql(
- "INSERT INTO trunc_tbl SELECT 'in master, after promotion: ' || g FROM generate_series(1, 10000) g"
+ primary_psql(
+ "INSERT INTO trunc_tbl SELECT 'in primary, after promotion: ' || g FROM generate_series(1, 10000) g"
);
# Truncate tail_tbl. pg_rewind should copy back the truncated part
# (We cannot use an actual TRUNCATE command here, as that creates a
# whole new relfilenode)
- master_psql("DELETE FROM tail_tbl WHERE id > 10");
- master_psql("VACUUM tail_tbl");
+ primary_psql("DELETE FROM tail_tbl WHERE id > 10");
+ primary_psql("VACUUM tail_tbl");
# Before running pg_rewind, do a couple of extra tests with several
# option combinations. As the code paths taken by those tests
@@ -72,7 +72,7 @@ sub run_test
# in "local" mode for simplicity's sake.
if ($test_mode eq 'local')
{
- my $master_pgdata = $node_master->data_dir;
+ my $primary_pgdata = $node_primary->data_dir;
my $standby_pgdata = $node_standby->data_dir;
# First check that pg_rewind fails if the target cluster is
@@ -82,7 +82,7 @@ sub run_test
[
'pg_rewind', '--debug',
'--source-pgdata', $standby_pgdata,
- '--target-pgdata', $master_pgdata,
+ '--target-pgdata', $primary_pgdata,
'--no-sync'
],
'pg_rewind with running target');
@@ -94,7 +94,7 @@ sub run_test
[
'pg_rewind', '--debug',
'--source-pgdata', $standby_pgdata,
- '--target-pgdata', $master_pgdata,
+ '--target-pgdata', $primary_pgdata,
'--no-sync', '--no-ensure-shutdown'
],
'pg_rewind --no-ensure-shutdown with running target');
@@ -102,12 +102,12 @@ sub run_test
# Stop the target, and attempt to run with a local source
# still running. This fails as pg_rewind requires to have
# a source cleanly stopped.
- $node_master->stop;
+ $node_primary->stop;
command_fails(
[
'pg_rewind', '--debug',
'--source-pgdata', $standby_pgdata,
- '--target-pgdata', $master_pgdata,
+ '--target-pgdata', $primary_pgdata,
'--no-sync', '--no-ensure-shutdown'
],
'pg_rewind with unexpected running source');
@@ -121,30 +121,30 @@ sub run_test
[
'pg_rewind', '--debug',
'--source-pgdata', $standby_pgdata,
- '--target-pgdata', $master_pgdata,
+ '--target-pgdata', $primary_pgdata,
'--no-sync', '--dry-run'
],
'pg_rewind --dry-run');
# Both clusters need to be alive moving forward.
$node_standby->start;
- $node_master->start;
+ $node_primary->start;
}
RewindTest::run_pg_rewind($test_mode);
check_query(
'SELECT * FROM tbl1',
- qq(in master
-in master, before promotion
+ qq(in primary
+in primary, before promotion
in standby, after promotion
),
'table content');
check_query(
'SELECT * FROM trunc_tbl',
- qq(in master
-in master, before promotion
+ qq(in primary
+in primary, before promotion
),
'truncation');
@@ -160,7 +160,7 @@ in master, before promotion
skip "unix-style permissions not supported on Windows", 1
if ($windows_os);
- ok(check_mode_recursive($node_master->data_dir(), 0700, 0600),
+ ok(check_mode_recursive($node_primary->data_dir(), 0700, 0600),
'check PGDATA permissions');
}
diff --git a/src/bin/pg_rewind/t/002_databases.pl b/src/bin/pg_rewind/t/002_databases.pl
index 1db534c0dc0..5506fe425bc 100644
--- a/src/bin/pg_rewind/t/002_databases.pl
+++ b/src/bin/pg_rewind/t/002_databases.pl
@@ -13,26 +13,26 @@ sub run_test
my $test_mode = shift;
RewindTest::setup_cluster($test_mode, ['-g']);
- RewindTest::start_master();
+ RewindTest::start_primary();
- # Create a database in master with a table.
- master_psql('CREATE DATABASE inmaster');
- master_psql('CREATE TABLE inmaster_tab (a int)', 'inmaster');
+ # Create a database in primary with a table.
+ primary_psql('CREATE DATABASE inprimary');
+ primary_psql('CREATE TABLE inprimary_tab (a int)', 'inprimary');
RewindTest::create_standby($test_mode);
# Create another database with another table, the creation is
# replicated to the standby.
- master_psql('CREATE DATABASE beforepromotion');
- master_psql('CREATE TABLE beforepromotion_tab (a int)',
+ primary_psql('CREATE DATABASE beforepromotion');
+ primary_psql('CREATE TABLE beforepromotion_tab (a int)',
'beforepromotion');
RewindTest::promote_standby();
- # Create databases in the old master and the new promoted standby.
- master_psql('CREATE DATABASE master_afterpromotion');
- master_psql('CREATE TABLE master_promotion_tab (a int)',
- 'master_afterpromotion');
+ # Create databases in the old primary and the new promoted standby.
+ primary_psql('CREATE DATABASE primary_afterpromotion');
+ primary_psql('CREATE TABLE primary_promotion_tab (a int)',
+ 'primary_afterpromotion');
standby_psql('CREATE DATABASE standby_afterpromotion');
standby_psql('CREATE TABLE standby_promotion_tab (a int)',
'standby_afterpromotion');
@@ -45,7 +45,7 @@ sub run_test
check_query(
'SELECT datname FROM pg_database ORDER BY 1',
qq(beforepromotion
-inmaster
+inprimary
postgres
standby_afterpromotion
template0
@@ -59,7 +59,7 @@ template1
skip "unix-style permissions not supported on Windows", 1
if ($windows_os);
- ok(check_mode_recursive($node_master->data_dir(), 0750, 0640),
+ ok(check_mode_recursive($node_primary->data_dir(), 0750, 0640),
'check PGDATA permissions');
}
diff --git a/src/bin/pg_rewind/t/003_extrafiles.pl b/src/bin/pg_rewind/t/003_extrafiles.pl
index f4710440fc3..48849fb49aa 100644
--- a/src/bin/pg_rewind/t/003_extrafiles.pl
+++ b/src/bin/pg_rewind/t/003_extrafiles.pl
@@ -18,21 +18,21 @@ sub run_test
my $test_mode = shift;
RewindTest::setup_cluster($test_mode);
- RewindTest::start_master();
+ RewindTest::start_primary();
- my $test_master_datadir = $node_master->data_dir;
+ my $test_primary_datadir = $node_primary->data_dir;
# Create a subdir and files that will be present in both
- mkdir "$test_master_datadir/tst_both_dir";
- append_to_file "$test_master_datadir/tst_both_dir/both_file1", "in both1";
- append_to_file "$test_master_datadir/tst_both_dir/both_file2", "in both2";
- mkdir "$test_master_datadir/tst_both_dir/both_subdir/";
- append_to_file "$test_master_datadir/tst_both_dir/both_subdir/both_file3",
+ mkdir "$test_primary_datadir/tst_both_dir";
+ append_to_file "$test_primary_datadir/tst_both_dir/both_file1", "in both1";
+ append_to_file "$test_primary_datadir/tst_both_dir/both_file2", "in both2";
+ mkdir "$test_primary_datadir/tst_both_dir/both_subdir/";
+ append_to_file "$test_primary_datadir/tst_both_dir/both_subdir/both_file3",
"in both3";
RewindTest::create_standby($test_mode);
- # Create different subdirs and files in master and standby
+ # Create different subdirs and files in primary and standby
my $test_standby_datadir = $node_standby->data_dir;
mkdir "$test_standby_datadir/tst_standby_dir";
@@ -45,15 +45,15 @@ sub run_test
"$test_standby_datadir/tst_standby_dir/standby_subdir/standby_file3",
"in standby3";
- mkdir "$test_master_datadir/tst_master_dir";
- append_to_file "$test_master_datadir/tst_master_dir/master_file1",
- "in master1";
- append_to_file "$test_master_datadir/tst_master_dir/master_file2",
- "in master2";
- mkdir "$test_master_datadir/tst_master_dir/master_subdir/";
+ mkdir "$test_primary_datadir/tst_primary_dir";
+ append_to_file "$test_primary_datadir/tst_primary_dir/primary_file1",
+ "in primary1";
+ append_to_file "$test_primary_datadir/tst_primary_dir/primary_file2",
+ "in primary2";
+ mkdir "$test_primary_datadir/tst_primary_dir/primary_subdir/";
append_to_file
- "$test_master_datadir/tst_master_dir/master_subdir/master_file3",
- "in master3";
+ "$test_primary_datadir/tst_primary_dir/primary_subdir/primary_file3",
+ "in primary3";
RewindTest::promote_standby();
RewindTest::run_pg_rewind($test_mode);
@@ -65,21 +65,21 @@ sub run_test
push @paths, $File::Find::name
if $File::Find::name =~ m/.*tst_.*/;
},
- $test_master_datadir);
+ $test_primary_datadir);
@paths = sort @paths;
is_deeply(
\@paths,
[
- "$test_master_datadir/tst_both_dir",
- "$test_master_datadir/tst_both_dir/both_file1",
- "$test_master_datadir/tst_both_dir/both_file2",
- "$test_master_datadir/tst_both_dir/both_subdir",
- "$test_master_datadir/tst_both_dir/both_subdir/both_file3",
- "$test_master_datadir/tst_standby_dir",
- "$test_master_datadir/tst_standby_dir/standby_file1",
- "$test_master_datadir/tst_standby_dir/standby_file2",
- "$test_master_datadir/tst_standby_dir/standby_subdir",
- "$test_master_datadir/tst_standby_dir/standby_subdir/standby_file3"
+ "$test_primary_datadir/tst_both_dir",
+ "$test_primary_datadir/tst_both_dir/both_file1",
+ "$test_primary_datadir/tst_both_dir/both_file2",
+ "$test_primary_datadir/tst_both_dir/both_subdir",
+ "$test_primary_datadir/tst_both_dir/both_subdir/both_file3",
+ "$test_primary_datadir/tst_standby_dir",
+ "$test_primary_datadir/tst_standby_dir/standby_file1",
+ "$test_primary_datadir/tst_standby_dir/standby_file2",
+ "$test_primary_datadir/tst_standby_dir/standby_subdir",
+ "$test_primary_datadir/tst_standby_dir/standby_subdir/standby_file3"
],
"file lists match");
diff --git a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
index 639eeb9c910..3813543ee1c 100644
--- a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
+++ b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
@@ -26,50 +26,50 @@ sub run_test
{
my $test_mode = shift;
- my $master_xlogdir = "${TestLib::tmp_check}/xlog_master";
+ my $primary_xlogdir = "${TestLib::tmp_check}/xlog_primary";
- rmtree($master_xlogdir);
+ rmtree($primary_xlogdir);
RewindTest::setup_cluster($test_mode);
- my $test_master_datadir = $node_master->data_dir;
+ my $test_primary_datadir = $node_primary->data_dir;
# turn pg_wal into a symlink
- print("moving $test_master_datadir/pg_wal to $master_xlogdir\n");
- move("$test_master_datadir/pg_wal", $master_xlogdir) or die;
- symlink($master_xlogdir, "$test_master_datadir/pg_wal") or die;
+ print("moving $test_primary_datadir/pg_wal to $primary_xlogdir\n");
+ move("$test_primary_datadir/pg_wal", $primary_xlogdir) or die;
+ symlink($primary_xlogdir, "$test_primary_datadir/pg_wal") or die;
- RewindTest::start_master();
+ RewindTest::start_primary();
- # Create a test table and insert a row in master.
- master_psql("CREATE TABLE tbl1 (d text)");
- master_psql("INSERT INTO tbl1 VALUES ('in master')");
+ # Create a test table and insert a row in primary.
+ primary_psql("CREATE TABLE tbl1 (d text)");
+ primary_psql("INSERT INTO tbl1 VALUES ('in primary')");
- master_psql("CHECKPOINT");
+ primary_psql("CHECKPOINT");
RewindTest::create_standby($test_mode);
- # Insert additional data on master that will be replicated to standby
- master_psql("INSERT INTO tbl1 values ('in master, before promotion')");
+ # Insert additional data on primary that will be replicated to standby
+ primary_psql("INSERT INTO tbl1 values ('in primary, before promotion')");
- master_psql('CHECKPOINT');
+ primary_psql('CHECKPOINT');
RewindTest::promote_standby();
- # Insert a row in the old master. This causes the master and standby
+ # Insert a row in the old primary. This causes the primary and standby
# to have "diverged", it's no longer possible to just apply the
- # standy's logs over master directory - you need to rewind.
- master_psql("INSERT INTO tbl1 VALUES ('in master, after promotion')");
+ # standy's logs over primary directory - you need to rewind.
+ primary_psql("INSERT INTO tbl1 VALUES ('in primary, after promotion')");
# Also insert a new row in the standby, which won't be present in the
- # old master.
+ # old primary.
standby_psql("INSERT INTO tbl1 VALUES ('in standby, after promotion')");
RewindTest::run_pg_rewind($test_mode);
check_query(
'SELECT * FROM tbl1',
- qq(in master
-in master, before promotion
+ qq(in primary
+in primary, before promotion
in standby, after promotion
),
'table content');
diff --git a/src/bin/pg_rewind/t/005_same_timeline.pl b/src/bin/pg_rewind/t/005_same_timeline.pl
index 5464f4203a7..8706d5aed5c 100644
--- a/src/bin/pg_rewind/t/005_same_timeline.pl
+++ b/src/bin/pg_rewind/t/005_same_timeline.pl
@@ -13,7 +13,7 @@ use lib $FindBin::RealBin;
use RewindTest;
RewindTest::setup_cluster();
-RewindTest::start_master();
+RewindTest::start_primary();
RewindTest::create_standby();
RewindTest::run_pg_rewind('local');
RewindTest::clean_rewind_test();
diff --git a/src/bin/pg_rewind/t/RewindTest.pm b/src/bin/pg_rewind/t/RewindTest.pm
index 7dabf395e10..149b99159d0 100644
--- a/src/bin/pg_rewind/t/RewindTest.pm
+++ b/src/bin/pg_rewind/t/RewindTest.pm
@@ -2,31 +2,31 @@ package RewindTest;
# Test driver for pg_rewind. Each test consists of a cycle where a new cluster
# is first created with initdb, and a streaming replication standby is set up
-# to follow the master. Then the master is shut down and the standby is
-# promoted, and finally pg_rewind is used to rewind the old master, using the
+# to follow the primary. Then the primary is shut down and the standby is
+# promoted, and finally pg_rewind is used to rewind the old primary, using the
# standby as the source.
#
# To run a test, the test script (in t/ subdirectory) calls the functions
# in this module. These functions should be called in this sequence:
#
-# 1. setup_cluster - creates a PostgreSQL cluster that runs as the master
+# 1. setup_cluster - creates a PostgreSQL cluster that runs as the primary
#
-# 2. start_master - starts the master server
+# 2. start_primary - starts the primary server
#
# 3. create_standby - runs pg_basebackup to initialize a standby server, and
-# sets it up to follow the master.
+# sets it up to follow the primary.
#
# 4. promote_standby - runs "pg_ctl promote" to promote the standby server.
-# The old master keeps running.
+# The old primary keeps running.
#
-# 5. run_pg_rewind - stops the old master (if it's still running) and runs
+# 5. run_pg_rewind - stops the old primary (if it's still running) and runs
# pg_rewind to synchronize it with the now-promoted standby server.
#
# 6. clean_rewind_test - stops both servers used in the test, if they're
# still running.
#
-# The test script can use the helper functions master_psql and standby_psql
-# to run psql against the master and standby servers, respectively.
+# The test script can use the helper functions primary_psql and standby_psql
+# to run psql against the primary and standby servers, respectively.
use strict;
use warnings;
@@ -43,15 +43,15 @@ use TestLib;
use Test::More;
our @EXPORT = qw(
- $node_master
+ $node_primary
$node_standby
- master_psql
+ primary_psql
standby_psql
check_query
setup_cluster
- start_master
+ start_primary
create_standby
promote_standby
run_pg_rewind
@@ -59,16 +59,16 @@ our @EXPORT = qw(
);
# Our nodes.
-our $node_master;
+our $node_primary;
our $node_standby;
-sub master_psql
+sub primary_psql
{
my $cmd = shift;
my $dbname = shift || 'postgres';
system_or_bail 'psql', '-q', '--no-psqlrc', '-d',
- $node_master->connstr($dbname), '-c', "$cmd";
+ $node_primary->connstr($dbname), '-c', "$cmd";
return;
}
@@ -82,7 +82,7 @@ sub standby_psql
return;
}
-# Run a query against the master, and check that the output matches what's
+# Run a query against the primary, and check that the output matches what's
# expected
sub check_query
{
@@ -94,7 +94,7 @@ sub check_query
# we want just the output, no formatting
my $result = run [
'psql', '-q', '-A', '-t', '--no-psqlrc', '-d',
- $node_master->connstr('postgres'),
+ $node_primary->connstr('postgres'),
'-c', $query
],
'>', \$stdout, '2>', \$stderr;
@@ -123,34 +123,34 @@ sub setup_cluster
my $extra_name = shift; # Used to differentiate clusters
my $extra = shift; # Extra params for initdb
- # Initialize master, data checksums are mandatory
- $node_master =
- get_new_node('master' . ($extra_name ? "_${extra_name}" : ''));
+ # Initialize primary, data checksums are mandatory
+ $node_primary =
+ get_new_node('primary' . ($extra_name ? "_${extra_name}" : ''));
# Set up pg_hba.conf and pg_ident.conf for the role running
# pg_rewind. This role is used for all the tests, and has
# minimal permissions enough to rewind from an online source.
- $node_master->init(
+ $node_primary->init(
allows_streaming => 1,
extra => $extra,
auth_extra => [ '--create-role', 'rewind_user' ]);
# Set wal_keep_segments to prevent WAL segment recycling after enforced
# checkpoints in the tests.
- $node_master->append_conf(
+ $node_primary->append_conf(
'postgresql.conf', qq(
wal_keep_segments = 20
));
return;
}
-sub start_master
+sub start_primary
{
- $node_master->start;
+ $node_primary->start;
# Create custom role which is used to run pg_rewind, and adjust its
# permissions to the minimum necessary.
- $node_master->safe_psql(
+ $node_primary->safe_psql(
'postgres', "
CREATE ROLE rewind_user LOGIN;
GRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean)
@@ -162,7 +162,7 @@ sub start_master
GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint, bigint, boolean)
TO rewind_user;");
- #### Now run the test-specific parts to initialize the master before setting
+ #### Now run the test-specific parts to initialize the primary before setting
# up standby
return;
@@ -174,13 +174,13 @@ sub create_standby
$node_standby =
get_new_node('standby' . ($extra_name ? "_${extra_name}" : ''));
- $node_master->backup('my_backup');
- $node_standby->init_from_backup($node_master, 'my_backup');
- my $connstr_master = $node_master->connstr();
+ $node_primary->backup('my_backup');
+ $node_standby->init_from_backup($node_primary, 'my_backup');
+ my $connstr_primary = $node_primary->connstr();
$node_standby->append_conf(
"postgresql.conf", qq(
-primary_conninfo='$connstr_master'
+primary_conninfo='$connstr_primary'
));
$node_standby->set_standby_mode();
@@ -200,10 +200,10 @@ sub promote_standby
# up standby
# Wait for the standby to receive and write all WAL.
- $node_master->wait_for_catchup($node_standby, 'write');
+ $node_primary->wait_for_catchup($node_standby, 'write');
- # Now promote standby and insert some new data on master, this will put
- # the master out-of-sync with the standby.
+ # Now promote standby and insert some new data on primary, this will put
+ # the primary out-of-sync with the standby.
$node_standby->promote;
# Force a checkpoint after the promotion. pg_rewind looks at the control
@@ -220,7 +220,7 @@ sub promote_standby
sub run_pg_rewind
{
my $test_mode = shift;
- my $master_pgdata = $node_master->data_dir;
+ my $primary_pgdata = $node_primary->data_dir;
my $standby_pgdata = $node_standby->data_dir;
my $standby_connstr = $node_standby->connstr('postgres');
my $tmp_folder = TestLib::tempdir;
@@ -239,14 +239,14 @@ sub run_pg_rewind
# segments but that would just make the test more costly,
# without improving the coverage. Hence, instead, stop
# gracefully the primary here.
- $node_master->stop;
+ $node_primary->stop;
}
else
{
- # Stop the master and be ready to perform the rewind. The cluster
+ # Stop the primary and be ready to perform the rewind. The cluster
# needs recovery to finish once, and pg_rewind makes sure that it
# happens automatically.
- $node_master->stop('immediate');
+ $node_primary->stop('immediate');
}
# At this point, the rewind processing is ready to run.
@@ -254,25 +254,25 @@ sub run_pg_rewind
# The real testing begins really now with a bifurcation of the possible
# scenarios that pg_rewind supports.
- # Keep a temporary postgresql.conf for master node or it would be
+ # Keep a temporary postgresql.conf for primary node or it would be
# overwritten during the rewind.
copy(
- "$master_pgdata/postgresql.conf",
- "$tmp_folder/master-postgresql.conf.tmp");
+ "$primary_pgdata/postgresql.conf",
+ "$tmp_folder/primary-postgresql.conf.tmp");
# Now run pg_rewind
if ($test_mode eq "local")
{
# Do rewind using a local pgdata as source
- # Stop the master and be ready to perform the rewind
+ # Stop the primary and be ready to perform the rewind
$node_standby->stop;
command_ok(
[
'pg_rewind',
"--debug",
"--source-pgdata=$standby_pgdata",
- "--target-pgdata=$master_pgdata",
+ "--target-pgdata=$primary_pgdata",
"--no-sync"
],
'pg_rewind local');
@@ -285,19 +285,19 @@ sub run_pg_rewind
[
'pg_rewind', "--debug",
"--source-server", $standby_connstr,
- "--target-pgdata=$master_pgdata", "--no-sync",
+ "--target-pgdata=$primary_pgdata", "--no-sync",
"--write-recovery-conf"
],
'pg_rewind remote');
# Check that standby.signal is here as recovery configuration
# was requested.
- ok( -e "$master_pgdata/standby.signal",
+ ok( -e "$primary_pgdata/standby.signal",
'standby.signal created after pg_rewind');
# Now, when pg_rewind apparently succeeded with minimal permissions,
# add REPLICATION privilege. So we could test that new standby
- # is able to connect to the new master with generated config.
+ # is able to connect to the new primary with generated config.
$node_standby->safe_psql('postgres',
"ALTER ROLE rewind_user WITH REPLICATION;");
}
@@ -305,30 +305,30 @@ sub run_pg_rewind
{
# Do rewind using a local pgdata as source and specified
- # directory with target WAL archive. The old master has
+ # directory with target WAL archive. The old primary has
# to be stopped at this point.
# Remove the existing archive directory and move all WAL
- # segments from the old master to the archives. These
+ # segments from the old primary to the archives. These
# will be used by pg_rewind.
- rmtree($node_master->archive_dir);
- RecursiveCopy::copypath($node_master->data_dir . "/pg_wal",
- $node_master->archive_dir);
+ rmtree($node_primary->archive_dir);
+ RecursiveCopy::copypath($node_primary->data_dir . "/pg_wal",
+ $node_primary->archive_dir);
# Fast way to remove entire directory content
- rmtree($node_master->data_dir . "/pg_wal");
- mkdir($node_master->data_dir . "/pg_wal");
+ rmtree($node_primary->data_dir . "/pg_wal");
+ mkdir($node_primary->data_dir . "/pg_wal");
# Make sure that directories have the right umask as this is
# required by a follow-up check on permissions, and better
# safe than sorry.
- chmod(0700, $node_master->archive_dir);
- chmod(0700, $node_master->data_dir . "/pg_wal");
+ chmod(0700, $node_primary->archive_dir);
+ chmod(0700, $node_primary->data_dir . "/pg_wal");
# Add appropriate restore_command to the target cluster
- $node_master->enable_restoring($node_master, 0);
+ $node_primary->enable_restoring($node_primary, 0);
- # Stop the new master and be ready to perform the rewind.
+ # Stop the new primary and be ready to perform the rewind.
$node_standby->stop;
# Note the use of --no-ensure-shutdown here. WAL files are
@@ -339,7 +339,7 @@ sub run_pg_rewind
'pg_rewind',
"--debug",
"--source-pgdata=$standby_pgdata",
- "--target-pgdata=$master_pgdata",
+ "--target-pgdata=$primary_pgdata",
"--no-sync",
"--no-ensure-shutdown",
"--restore-target-wal"
@@ -355,28 +355,28 @@ sub run_pg_rewind
# Now move back postgresql.conf with old settings
move(
- "$tmp_folder/master-postgresql.conf.tmp",
- "$master_pgdata/postgresql.conf");
+ "$tmp_folder/primary-postgresql.conf.tmp",
+ "$primary_pgdata/postgresql.conf");
chmod(
- $node_master->group_access() ? 0640 : 0600,
- "$master_pgdata/postgresql.conf")
+ $node_primary->group_access() ? 0640 : 0600,
+ "$primary_pgdata/postgresql.conf")
or BAIL_OUT(
- "unable to set permissions for $master_pgdata/postgresql.conf");
+ "unable to set permissions for $primary_pgdata/postgresql.conf");
# Plug-in rewound node to the now-promoted standby node
if ($test_mode ne "remote")
{
my $port_standby = $node_standby->port;
- $node_master->append_conf(
+ $node_primary->append_conf(
'postgresql.conf', qq(
primary_conninfo='port=$port_standby'));
- $node_master->set_standby_mode();
+ $node_primary->set_standby_mode();
}
- # Restart the master to check that rewind went correctly
- $node_master->start;
+ # Restart the primary to check that rewind went correctly
+ $node_primary->start;
#### Now run the test-specific parts to check the result
@@ -386,7 +386,7 @@ primary_conninfo='port=$port_standby'));
# Clean up after the test. Stop both servers, if they're still running.
sub clean_rewind_test
{
- $node_master->teardown_node if defined $node_master;
+ $node_primary->teardown_node if defined $node_primary;
$node_standby->teardown_node if defined $node_standby;
return;
}
diff --git a/src/bin/pg_verifybackup/t/002_algorithm.pl b/src/bin/pg_verifybackup/t/002_algorithm.pl
index d0c97ae3cc3..6c118832668 100644
--- a/src/bin/pg_verifybackup/t/002_algorithm.pl
+++ b/src/bin/pg_verifybackup/t/002_algorithm.pl
@@ -9,13 +9,13 @@ use PostgresNode;
use TestLib;
use Test::More tests => 19;
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
-$master->start;
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
+$primary->start;
for my $algorithm (qw(bogus none crc32c sha224 sha256 sha384 sha512))
{
- my $backup_path = $master->backup_dir . '/' . $algorithm;
+ my $backup_path = $primary->backup_dir . '/' . $algorithm;
my @backup = (
'pg_basebackup', '-D', $backup_path,
'--manifest-checksums', $algorithm, '--no-sync');
@@ -24,13 +24,13 @@ for my $algorithm (qw(bogus none crc32c sha224 sha256 sha384 sha512))
# A backup with a bogus algorithm should fail.
if ($algorithm eq 'bogus')
{
- $master->command_fails(\@backup,
+ $primary->command_fails(\@backup,
"backup fails with algorithm \"$algorithm\"");
next;
}
# A backup with a valid algorithm should work.
- $master->command_ok(\@backup, "backup ok with algorithm \"$algorithm\"");
+ $primary->command_ok(\@backup, "backup ok with algorithm \"$algorithm\"");
# We expect each real checksum algorithm to be mentioned on every line of
# the backup manifest file except the first and last; for simplicity, we
@@ -50,7 +50,7 @@ for my $algorithm (qw(bogus none crc32c sha224 sha256 sha384 sha512))
}
# Make sure that it verifies OK.
- $master->command_ok(\@verify,
+ $primary->command_ok(\@verify,
"verify backup with algorithm \"$algorithm\"");
# Remove backup immediately to save disk space.
diff --git a/src/bin/pg_verifybackup/t/003_corruption.pl b/src/bin/pg_verifybackup/t/003_corruption.pl
index c2e04d0be20..0c0691ba2b2 100644
--- a/src/bin/pg_verifybackup/t/003_corruption.pl
+++ b/src/bin/pg_verifybackup/t/003_corruption.pl
@@ -9,9 +9,9 @@ use PostgresNode;
use TestLib;
use Test::More tests => 44;
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
-$master->start;
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
+$primary->start;
# Include a user-defined tablespace in the hopes of detecting problems in that
# area.
@@ -19,7 +19,7 @@ my $source_ts_path = TestLib::perl2host(TestLib::tempdir_short());
my $source_ts_prefix = $source_ts_path;
$source_ts_prefix =~ s!(^[A-Z]:/[^/]*)/.*!$1!;
-$master->safe_psql('postgres', <<EOM);
+$primary->safe_psql('postgres', <<EOM);
CREATE TABLE x1 (a int);
INSERT INTO x1 VALUES (111);
CREATE TABLESPACE ts1 LOCATION '$source_ts_path';
@@ -103,13 +103,13 @@ for my $scenario (@scenario)
if $scenario->{'skip_on_windows'} && $windows_os;
# Take a backup and check that it verifies OK.
- my $backup_path = $master->backup_dir . '/' . $name;
+ my $backup_path = $primary->backup_dir . '/' . $name;
my $backup_ts_path = TestLib::perl2host(TestLib::tempdir_short());
# The tablespace map parameter confuses Msys2, which tries to mangle
# it. Tell it not to.
# See https://www.msys2.org/wiki/Porting/#filesystem-namespaces
local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;
- $master->command_ok(
+ $primary->command_ok(
[
'pg_basebackup', '-D', $backup_path, '--no-sync',
'-T', "${source_ts_path}=${backup_ts_path}"
diff --git a/src/bin/pg_verifybackup/t/004_options.pl b/src/bin/pg_verifybackup/t/004_options.pl
index 271b7ee5043..1bd0aab5459 100644
--- a/src/bin/pg_verifybackup/t/004_options.pl
+++ b/src/bin/pg_verifybackup/t/004_options.pl
@@ -10,11 +10,11 @@ use TestLib;
use Test::More tests => 25;
# Start up the server and take a backup.
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
-$master->start;
-my $backup_path = $master->backup_dir . '/test_options';
-$master->command_ok([ 'pg_basebackup', '-D', $backup_path, '--no-sync' ],
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
+$primary->start;
+my $backup_path = $primary->backup_dir . '/test_options';
+$primary->command_ok([ 'pg_basebackup', '-D', $backup_path, '--no-sync' ],
"base backup ok");
# Verify that pg_verifybackup -q succeeds and produces no output.
diff --git a/src/bin/pg_verifybackup/t/006_encoding.pl b/src/bin/pg_verifybackup/t/006_encoding.pl
index 5ab9649ab6f..35b854a78e8 100644
--- a/src/bin/pg_verifybackup/t/006_encoding.pl
+++ b/src/bin/pg_verifybackup/t/006_encoding.pl
@@ -8,11 +8,11 @@ use PostgresNode;
use TestLib;
use Test::More tests => 5;
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
-$master->start;
-my $backup_path = $master->backup_dir . '/test_encoding';
-$master->command_ok(
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
+$primary->start;
+my $backup_path = $primary->backup_dir . '/test_encoding';
+$primary->command_ok(
[
'pg_basebackup', '-D',
$backup_path, '--no-sync',
diff --git a/src/bin/pg_verifybackup/t/007_wal.pl b/src/bin/pg_verifybackup/t/007_wal.pl
index 56d536675c9..23a4f8bbd8d 100644
--- a/src/bin/pg_verifybackup/t/007_wal.pl
+++ b/src/bin/pg_verifybackup/t/007_wal.pl
@@ -10,16 +10,16 @@ use TestLib;
use Test::More tests => 7;
# Start up the server and take a backup.
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
-$master->start;
-my $backup_path = $master->backup_dir . '/test_wal';
-$master->command_ok([ 'pg_basebackup', '-D', $backup_path, '--no-sync' ],
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
+$primary->start;
+my $backup_path = $primary->backup_dir . '/test_wal';
+$primary->command_ok([ 'pg_basebackup', '-D', $backup_path, '--no-sync' ],
"base backup ok");
# Rename pg_wal.
my $original_pg_wal = $backup_path . '/pg_wal';
-my $relocated_pg_wal = $master->backup_dir . '/relocated_pg_wal';
+my $relocated_pg_wal = $primary->backup_dir . '/relocated_pg_wal';
rename($original_pg_wal, $relocated_pg_wal) || die "rename pg_wal: $!";
# WAL verification should fail.
diff --git a/contrib/bloom/t/001_wal.pl b/contrib/bloom/t/001_wal.pl
index 0f2628b5575..7f6398f5712 100644
--- a/contrib/bloom/t/001_wal.pl
+++ b/contrib/bloom/t/001_wal.pl
@@ -5,10 +5,10 @@ use PostgresNode;
use TestLib;
use Test::More tests => 31;
-my $node_master;
+my $node_primary;
my $node_standby;
-# Run few queries on both master and standby and check their results match.
+# Run few queries on both primary and standby and check their results match.
sub test_index_replay
{
my ($test_name) = @_;
@@ -17,7 +17,7 @@ sub test_index_replay
my $applname = $node_standby->name;
my $caughtup_query =
"SELECT pg_current_wal_lsn() <= write_lsn FROM pg_stat_replication WHERE application_name = '$applname';";
- $node_master->poll_query_until('postgres', $caughtup_query)
+ $node_primary->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby 1 to catch up";
my $queries = qq(SET enable_seqscan=off;
@@ -32,35 +32,35 @@ SELECT * FROM tst WHERE i = 7 AND t = 'e';
);
# Run test queries and compare their result
- my $master_result = $node_master->safe_psql("postgres", $queries);
+ my $primary_result = $node_primary->safe_psql("postgres", $queries);
my $standby_result = $node_standby->safe_psql("postgres", $queries);
- is($master_result, $standby_result, "$test_name: query result matches");
+ is($primary_result, $standby_result, "$test_name: query result matches");
return;
}
-# Initialize master node
-$node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
-$node_master->start;
+# Initialize primary node
+$node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
+$node_primary->start;
my $backup_name = 'my_backup';
# Take backup
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
-# Create streaming standby linking to master
+# Create streaming standby linking to primary
$node_standby = get_new_node('standby');
-$node_standby->init_from_backup($node_master, $backup_name,
+$node_standby->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby->start;
-# Create some bloom index on master
-$node_master->safe_psql("postgres", "CREATE EXTENSION bloom;");
-$node_master->safe_psql("postgres", "CREATE TABLE tst (i int4, t text);");
-$node_master->safe_psql("postgres",
+# Create some bloom index on primary
+$node_primary->safe_psql("postgres", "CREATE EXTENSION bloom;");
+$node_primary->safe_psql("postgres", "CREATE TABLE tst (i int4, t text);");
+$node_primary->safe_psql("postgres",
"INSERT INTO tst SELECT i%10, substr(md5(i::text), 1, 1) FROM generate_series(1,100000) i;"
);
-$node_master->safe_psql("postgres",
+$node_primary->safe_psql("postgres",
"CREATE INDEX bloomidx ON tst USING bloom (i, t) WITH (col1 = 3);");
# Test that queries give same result
@@ -69,12 +69,12 @@ test_index_replay('initial');
# Run 10 cycles of table modification. Run test queries after each modification.
for my $i (1 .. 10)
{
- $node_master->safe_psql("postgres", "DELETE FROM tst WHERE i = $i;");
+ $node_primary->safe_psql("postgres", "DELETE FROM tst WHERE i = $i;");
test_index_replay("delete $i");
- $node_master->safe_psql("postgres", "VACUUM tst;");
+ $node_primary->safe_psql("postgres", "VACUUM tst;");
test_index_replay("vacuum $i");
my ($start, $end) = (100001 + ($i - 1) * 10000, 100000 + $i * 10000);
- $node_master->safe_psql("postgres",
+ $node_primary->safe_psql("postgres",
"INSERT INTO tst SELECT i%10, substr(md5(i::text), 1, 1) FROM generate_series($start,$end) i;"
);
test_index_replay("insert $i");
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 82536eb60fb..1305de0051a 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -51,8 +51,8 @@ sub test_role
return;
}
-# Initialize master node
-my $node = get_new_node('master');
+# Initialize primary node
+my $node = get_new_node('primary');
$node->init;
$node->start;
diff --git a/src/test/authentication/t/002_saslprep.pl b/src/test/authentication/t/002_saslprep.pl
index 32d4e43fc7d..0aaab090ec5 100644
--- a/src/test/authentication/t/002_saslprep.pl
+++ b/src/test/authentication/t/002_saslprep.pl
@@ -49,9 +49,9 @@ sub test_login
return;
}
-# Initialize master node. Force UTF-8 encoding, so that we can use non-ASCII
+# Initialize primary node. Force UTF-8 encoding, so that we can use non-ASCII
# characters in the passwords below.
-my $node = get_new_node('master');
+my $node = get_new_node('primary');
$node->init(extra => [ '--locale=C', '--encoding=UTF8' ]);
$node->start;
diff --git a/src/test/modules/commit_ts/t/002_standby.pl b/src/test/modules/commit_ts/t/002_standby.pl
index f376b595962..872efb2e8ea 100644
--- a/src/test/modules/commit_ts/t/002_standby.pl
+++ b/src/test/modules/commit_ts/t/002_standby.pl
@@ -8,45 +8,45 @@ use Test::More tests => 4;
use PostgresNode;
my $bkplabel = 'backup';
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
-$master->append_conf(
+$primary->append_conf(
'postgresql.conf', qq{
track_commit_timestamp = on
max_wal_senders = 5
});
-$master->start;
-$master->backup($bkplabel);
+$primary->start;
+$primary->backup($bkplabel);
my $standby = get_new_node('standby');
-$standby->init_from_backup($master, $bkplabel, has_streaming => 1);
+$standby->init_from_backup($primary, $bkplabel, has_streaming => 1);
$standby->start;
for my $i (1 .. 10)
{
- $master->safe_psql('postgres', "create table t$i()");
+ $primary->safe_psql('postgres', "create table t$i()");
}
-my $master_ts = $master->safe_psql('postgres',
+my $primary_ts = $primary->safe_psql('postgres',
qq{SELECT ts.* FROM pg_class, pg_xact_commit_timestamp(xmin) AS ts WHERE relname = 't10'}
);
-my $master_lsn =
- $master->safe_psql('postgres', 'select pg_current_wal_lsn()');
+my $primary_lsn =
+ $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');
$standby->poll_query_until('postgres',
- qq{SELECT '$master_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})
+ qq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})
or die "standby never caught up";
my $standby_ts = $standby->safe_psql('postgres',
qq{select ts.* from pg_class, pg_xact_commit_timestamp(xmin) ts where relname = 't10'}
);
-is($master_ts, $standby_ts, "standby gives same value as master");
+is($primary_ts, $standby_ts, "standby gives same value as primary");
-$master->append_conf('postgresql.conf', 'track_commit_timestamp = off');
-$master->restart;
-$master->safe_psql('postgres', 'checkpoint');
-$master_lsn = $master->safe_psql('postgres', 'select pg_current_wal_lsn()');
+$primary->append_conf('postgresql.conf', 'track_commit_timestamp = off');
+$primary->restart;
+$primary->safe_psql('postgres', 'checkpoint');
+$primary_lsn = $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');
$standby->poll_query_until('postgres',
- qq{SELECT '$master_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})
+ qq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})
or die "standby never caught up";
$standby->safe_psql('postgres', 'checkpoint');
@@ -54,10 +54,10 @@ $standby->safe_psql('postgres', 'checkpoint');
my ($ret, $standby_ts_stdout, $standby_ts_stderr) = $standby->psql('postgres',
'select ts.* from pg_class, pg_xact_commit_timestamp(xmin) ts where relname = \'t10\''
);
-is($ret, 3, 'standby errors when master turned feature off');
+is($ret, 3, 'standby errors when primary turned feature off');
is($standby_ts_stdout, '',
- "standby gives no value when master turned feature off");
+ "standby gives no value when primary turned feature off");
like(
$standby_ts_stderr,
qr/could not get commit timestamp data/,
- 'expected error when master turned feature off');
+ 'expected error when primary turned feature off');
diff --git a/src/test/modules/commit_ts/t/003_standby_2.pl b/src/test/modules/commit_ts/t/003_standby_2.pl
index 9165d500536..36ab829dfdd 100644
--- a/src/test/modules/commit_ts/t/003_standby_2.pl
+++ b/src/test/modules/commit_ts/t/003_standby_2.pl
@@ -1,4 +1,4 @@
-# Test master/standby scenario where the track_commit_timestamp GUC is
+# Test primary/standby scenario where the track_commit_timestamp GUC is
# repeatedly toggled on and off.
use strict;
use warnings;
@@ -8,31 +8,31 @@ use Test::More tests => 4;
use PostgresNode;
my $bkplabel = 'backup';
-my $master = get_new_node('master');
-$master->init(allows_streaming => 1);
-$master->append_conf(
+my $primary = get_new_node('primary');
+$primary->init(allows_streaming => 1);
+$primary->append_conf(
'postgresql.conf', qq{
track_commit_timestamp = on
max_wal_senders = 5
});
-$master->start;
-$master->backup($bkplabel);
+$primary->start;
+$primary->backup($bkplabel);
my $standby = get_new_node('standby');
-$standby->init_from_backup($master, $bkplabel, has_streaming => 1);
+$standby->init_from_backup($primary, $bkplabel, has_streaming => 1);
$standby->start;
for my $i (1 .. 10)
{
- $master->safe_psql('postgres', "create table t$i()");
+ $primary->safe_psql('postgres', "create table t$i()");
}
-$master->append_conf('postgresql.conf', 'track_commit_timestamp = off');
-$master->restart;
-$master->safe_psql('postgres', 'checkpoint');
-my $master_lsn =
- $master->safe_psql('postgres', 'select pg_current_wal_lsn()');
+$primary->append_conf('postgresql.conf', 'track_commit_timestamp = off');
+$primary->restart;
+$primary->safe_psql('postgres', 'checkpoint');
+my $primary_lsn =
+ $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');
$standby->poll_query_until('postgres',
- qq{SELECT '$master_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})
+ qq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})
or die "standby never caught up";
$standby->safe_psql('postgres', 'checkpoint');
@@ -49,10 +49,10 @@ like(
qr/could not get commit timestamp data/,
'expected err msg after restart');
-$master->append_conf('postgresql.conf', 'track_commit_timestamp = on');
-$master->restart;
-$master->append_conf('postgresql.conf', 'track_commit_timestamp = off');
-$master->restart;
+$primary->append_conf('postgresql.conf', 'track_commit_timestamp = on');
+$primary->restart;
+$primary->append_conf('postgresql.conf', 'track_commit_timestamp = off');
+$primary->restart;
system_or_bail('pg_ctl', '-D', $standby->data_dir, 'promote');
diff --git a/src/test/modules/commit_ts/t/004_restart.pl b/src/test/modules/commit_ts/t/004_restart.pl
index 39ca25a06bf..4e6ae776b97 100644
--- a/src/test/modules/commit_ts/t/004_restart.pl
+++ b/src/test/modules/commit_ts/t/004_restart.pl
@@ -5,15 +5,15 @@ use PostgresNode;
use TestLib;
use Test::More tests => 16;
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
-$node_master->append_conf('postgresql.conf', 'track_commit_timestamp = on');
-$node_master->start;
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
+$node_primary->append_conf('postgresql.conf', 'track_commit_timestamp = on');
+$node_primary->start;
my ($ret, $stdout, $stderr);
($ret, $stdout, $stderr) =
- $node_master->psql('postgres', qq[SELECT pg_xact_commit_timestamp('0');]);
+ $node_primary->psql('postgres', qq[SELECT pg_xact_commit_timestamp('0');]);
is($ret, 3, 'getting ts of InvalidTransactionId reports error');
like(
$stderr,
@@ -21,27 +21,27 @@ like(
'expected error from InvalidTransactionId');
($ret, $stdout, $stderr) =
- $node_master->psql('postgres', qq[SELECT pg_xact_commit_timestamp('1');]);
+ $node_primary->psql('postgres', qq[SELECT pg_xact_commit_timestamp('1');]);
is($ret, 0, 'getting ts of BootstrapTransactionId succeeds');
is($stdout, '', 'timestamp of BootstrapTransactionId is null');
($ret, $stdout, $stderr) =
- $node_master->psql('postgres', qq[SELECT pg_xact_commit_timestamp('2');]);
+ $node_primary->psql('postgres', qq[SELECT pg_xact_commit_timestamp('2');]);
is($ret, 0, 'getting ts of FrozenTransactionId succeeds');
is($stdout, '', 'timestamp of FrozenTransactionId is null');
# Since FirstNormalTransactionId will've occurred during initdb, long before we
# enabled commit timestamps, it'll be null since we have no cts data for it but
# cts are enabled.
-is( $node_master->safe_psql(
+is( $node_primary->safe_psql(
'postgres', qq[SELECT pg_xact_commit_timestamp('3');]),
'',
'committs for FirstNormalTransactionId is null');
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
qq[CREATE TABLE committs_test(x integer, y timestamp with time zone);]);
-my $xid = $node_master->safe_psql(
+my $xid = $node_primary->safe_psql(
'postgres', qq[
BEGIN;
INSERT INTO committs_test(x, y) VALUES (1, current_timestamp);
@@ -49,43 +49,43 @@ my $xid = $node_master->safe_psql(
COMMIT;
]);
-my $before_restart_ts = $node_master->safe_psql('postgres',
+my $before_restart_ts = $node_primary->safe_psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid');]);
ok($before_restart_ts ne '' && $before_restart_ts ne 'null',
'commit timestamp recorded');
-$node_master->stop('immediate');
-$node_master->start;
+$node_primary->stop('immediate');
+$node_primary->start;
-my $after_crash_ts = $node_master->safe_psql('postgres',
+my $after_crash_ts = $node_primary->safe_psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid');]);
is($after_crash_ts, $before_restart_ts,
'timestamps before and after crash are equal');
-$node_master->stop('fast');
-$node_master->start;
+$node_primary->stop('fast');
+$node_primary->start;
-my $after_restart_ts = $node_master->safe_psql('postgres',
+my $after_restart_ts = $node_primary->safe_psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid');]);
is($after_restart_ts, $before_restart_ts,
'timestamps before and after restart are equal');
# Now disable commit timestamps
-$node_master->append_conf('postgresql.conf', 'track_commit_timestamp = off');
-$node_master->stop('fast');
+$node_primary->append_conf('postgresql.conf', 'track_commit_timestamp = off');
+$node_primary->stop('fast');
# Start the server, which generates a XLOG_PARAMETER_CHANGE record where
# the parameter change is registered.
-$node_master->start;
+$node_primary->start;
# Now restart again the server so as no XLOG_PARAMETER_CHANGE record are
# replayed with the follow-up immediate shutdown.
-$node_master->restart;
+$node_primary->restart;
# Move commit timestamps across page boundaries. Things should still
# be able to work across restarts with those transactions committed while
# track_commit_timestamp is disabled.
-$node_master->safe_psql(
+$node_primary->safe_psql(
'postgres',
qq(CREATE PROCEDURE consume_xid(cnt int)
AS \$\$
@@ -100,9 +100,9 @@ DECLARE
\$\$
LANGUAGE plpgsql;
));
-$node_master->safe_psql('postgres', 'CALL consume_xid(2000)');
+$node_primary->safe_psql('postgres', 'CALL consume_xid(2000)');
-($ret, $stdout, $stderr) = $node_master->psql('postgres',
+($ret, $stdout, $stderr) = $node_primary->psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid');]);
is($ret, 3, 'no commit timestamp from enable tx when cts disabled');
like(
@@ -111,7 +111,7 @@ like(
'expected error from enabled tx when committs disabled');
# Do a tx while cts disabled
-my $xid_disabled = $node_master->safe_psql(
+my $xid_disabled = $node_primary->safe_psql(
'postgres', qq[
BEGIN;
INSERT INTO committs_test(x, y) VALUES (2, current_timestamp);
@@ -120,7 +120,7 @@ my $xid_disabled = $node_master->safe_psql(
]);
# Should be inaccessible
-($ret, $stdout, $stderr) = $node_master->psql('postgres',
+($ret, $stdout, $stderr) = $node_primary->psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid_disabled');]);
is($ret, 3, 'no commit timestamp when disabled');
like(
@@ -129,21 +129,21 @@ like(
'expected error from disabled tx when committs disabled');
# Re-enable, restart and ensure we can still get the old timestamps
-$node_master->append_conf('postgresql.conf', 'track_commit_timestamp = on');
+$node_primary->append_conf('postgresql.conf', 'track_commit_timestamp = on');
# An immediate shutdown is used here. At next startup recovery will
# replay transactions which committed when track_commit_timestamp was
# disabled, and the facility should be able to work properly.
-$node_master->stop('immediate');
-$node_master->start;
+$node_primary->stop('immediate');
+$node_primary->start;
-my $after_enable_ts = $node_master->safe_psql('postgres',
+my $after_enable_ts = $node_primary->safe_psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid');]);
is($after_enable_ts, '', 'timestamp of enabled tx null after re-enable');
-my $after_enable_disabled_ts = $node_master->safe_psql('postgres',
+my $after_enable_disabled_ts = $node_primary->safe_psql('postgres',
qq[SELECT pg_xact_commit_timestamp('$xid_disabled');]);
is($after_enable_disabled_ts, '',
'timestamp of disabled tx null after re-enable');
-$node_master->stop;
+$node_primary->stop;
diff --git a/src/test/modules/test_misc/t/001_constraint_validation.pl b/src/test/modules/test_misc/t/001_constraint_validation.pl
index f762bc21c19..22497f22b01 100644
--- a/src/test/modules/test_misc/t/001_constraint_validation.pl
+++ b/src/test/modules/test_misc/t/001_constraint_validation.pl
@@ -7,7 +7,7 @@ use TestLib;
use Test::More tests => 42;
# Initialize a test cluster
-my $node = get_new_node('master');
+my $node = get_new_node('primary');
$node->init();
# Turn message level up to DEBUG1 so that we get the messages we want to see
$node->append_conf('postgresql.conf', 'client_min_messages = DEBUG1');
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 1407359aef6..b216bbbe4bb 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1822,11 +1822,11 @@ sub run_log
Look up WAL locations on the server:
- * insert location (master only, error on replica)
- * write location (master only, error on replica)
- * flush location (master only, error on replica)
- * receive location (always undef on master)
- * replay location (always undef on master)
+ * insert location (primary only, error on replica)
+ * write location (primary only, error on replica)
+ * flush location (primary only, error on replica)
+ * receive location (always undef on primary)
+ * replay location (always undef on primary)
mode must be specified.
@@ -1876,7 +1876,7 @@ poll_query_until timeout.
Requires that the 'postgres' db exists and is accessible.
-target_lsn may be any arbitrary lsn, but is typically $master_node->lsn('insert').
+target_lsn may be any arbitrary lsn, but is typically $primary_node->lsn('insert').
If omitted, pg_current_wal_lsn() is used.
This is not a test. It die()s on failure.
@@ -1935,7 +1935,7 @@ This is not a test. It die()s on failure.
If the slot is not active, will time out after poll_query_until's timeout.
-target_lsn may be any arbitrary lsn, but is typically $master_node->lsn('insert').
+target_lsn may be any arbitrary lsn, but is typically $primary_node->lsn('insert').
Note that for logical slots, restart_lsn is held down by the oldest in-progress tx.
diff --git a/src/test/perl/README b/src/test/perl/README
index c61c3f5e942..fd9394957f7 100644
--- a/src/test/perl/README
+++ b/src/test/perl/README
@@ -48,7 +48,7 @@ Each test script should begin with:
then it will generally need to set up one or more nodes, run commands
against them and evaluate the results. For example:
- my $node = PostgresNode->get_new_node('master');
+ my $node = PostgresNode->get_new_node('primary');
$node->init;
$node->start;
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index 0c316c18082..637fb9e432a 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -5,22 +5,22 @@ use PostgresNode;
use TestLib;
use Test::More tests => 35;
-# Initialize master node
-my $node_master = get_new_node('master');
+# Initialize primary node
+my $node_primary = get_new_node('primary');
# A specific role is created to perform some tests related to replication,
# and it needs proper authentication configuration.
-$node_master->init(
+$node_primary->init(
allows_streaming => 1,
auth_extra => [ '--create-role', 'repl_role' ]);
-$node_master->start;
+$node_primary->start;
my $backup_name = 'my_backup';
# Take backup
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
-# Create streaming standby linking to master
+# Create streaming standby linking to primary
my $node_standby_1 = get_new_node('standby_1');
-$node_standby_1->init_from_backup($node_master, $backup_name,
+$node_standby_1->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby_1->start;
@@ -28,10 +28,10 @@ $node_standby_1->start;
# pg_basebackup works on a standby).
$node_standby_1->backup($backup_name);
-# Take a second backup of the standby while the master is offline.
-$node_master->stop;
+# Take a second backup of the standby while the primary is offline.
+$node_primary->stop;
$node_standby_1->backup('my_backup_2');
-$node_master->start;
+$node_primary->start;
# Create second standby node linking to standby 1
my $node_standby_2 = get_new_node('standby_2');
@@ -39,13 +39,13 @@ $node_standby_2->init_from_backup($node_standby_1, $backup_name,
has_streaming => 1);
$node_standby_2->start;
-# Create some content on master and check its presence in standby 1
-$node_master->safe_psql('postgres',
+# Create some content on primary and check its presence in standby 1
+$node_primary->safe_psql('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a");
# Wait for standbys to catch up
-$node_master->wait_for_catchup($node_standby_1, 'replay',
- $node_master->lsn('insert'));
+$node_primary->wait_for_catchup($node_standby_1, 'replay',
+ $node_primary->lsn('insert'));
$node_standby_1->wait_for_catchup($node_standby_2, 'replay',
$node_standby_1->lsn('replay'));
@@ -105,57 +105,57 @@ sub test_target_session_attrs
return;
}
-# Connect to master in "read-write" mode with master,standby1 list.
-test_target_session_attrs($node_master, $node_standby_1, $node_master,
+# Connect to primary in "read-write" mode with primary,standby1 list.
+test_target_session_attrs($node_primary, $node_standby_1, $node_primary,
"read-write", 0);
-# Connect to master in "read-write" mode with standby1,master list.
-test_target_session_attrs($node_standby_1, $node_master, $node_master,
+# Connect to primary in "read-write" mode with standby1,primary list.
+test_target_session_attrs($node_standby_1, $node_primary, $node_primary,
"read-write", 0);
-# Connect to master in "any" mode with master,standby1 list.
-test_target_session_attrs($node_master, $node_standby_1, $node_master, "any",
+# Connect to primary in "any" mode with primary,standby1 list.
+test_target_session_attrs($node_primary, $node_standby_1, $node_primary, "any",
0);
-# Connect to standby1 in "any" mode with standby1,master list.
-test_target_session_attrs($node_standby_1, $node_master, $node_standby_1,
+# Connect to standby1 in "any" mode with standby1,primary list.
+test_target_session_attrs($node_standby_1, $node_primary, $node_standby_1,
"any", 0);
# Test for SHOW commands using a WAL sender connection with a replication
# role.
note "testing SHOW commands for replication connection";
-$node_master->psql(
+$node_primary->psql(
'postgres', "
CREATE ROLE repl_role REPLICATION LOGIN;
GRANT pg_read_all_settings TO repl_role;");
-my $master_host = $node_master->host;
-my $master_port = $node_master->port;
-my $connstr_common = "host=$master_host port=$master_port user=repl_role";
+my $primary_host = $node_primary->host;
+my $primary_port = $node_primary->port;
+my $connstr_common = "host=$primary_host port=$primary_port user=repl_role";
my $connstr_rep = "$connstr_common replication=1";
my $connstr_db = "$connstr_common replication=database dbname=postgres";
# Test SHOW ALL
-my ($ret, $stdout, $stderr) = $node_master->psql(
+my ($ret, $stdout, $stderr) = $node_primary->psql(
'postgres', 'SHOW ALL;',
on_error_die => 1,
extra_params => [ '-d', $connstr_rep ]);
ok($ret == 0, "SHOW ALL with replication role and physical replication");
-($ret, $stdout, $stderr) = $node_master->psql(
+($ret, $stdout, $stderr) = $node_primary->psql(
'postgres', 'SHOW ALL;',
on_error_die => 1,
extra_params => [ '-d', $connstr_db ]);
ok($ret == 0, "SHOW ALL with replication role and logical replication");
# Test SHOW with a user-settable parameter
-($ret, $stdout, $stderr) = $node_master->psql(
+($ret, $stdout, $stderr) = $node_primary->psql(
'postgres', 'SHOW work_mem;',
on_error_die => 1,
extra_params => [ '-d', $connstr_rep ]);
ok( $ret == 0,
"SHOW with user-settable parameter, replication role and physical replication"
);
-($ret, $stdout, $stderr) = $node_master->psql(
+($ret, $stdout, $stderr) = $node_primary->psql(
'postgres', 'SHOW work_mem;',
on_error_die => 1,
extra_params => [ '-d', $connstr_db ]);
@@ -164,14 +164,14 @@ ok( $ret == 0,
);
# Test SHOW with a superuser-settable parameter
-($ret, $stdout, $stderr) = $node_master->psql(
+($ret, $stdout, $stderr) = $node_primary->psql(
'postgres', 'SHOW primary_conninfo;',
on_error_die => 1,
extra_params => [ '-d', $connstr_rep ]);
ok( $ret == 0,
"SHOW with superuser-settable parameter, replication role and physical replication"
);
-($ret, $stdout, $stderr) = $node_master->psql(
+($ret, $stdout, $stderr) = $node_primary->psql(
'postgres', 'SHOW primary_conninfo;',
on_error_die => 1,
extra_params => [ '-d', $connstr_db ]);
@@ -186,13 +186,13 @@ note "switching to physical replication slot";
# standbys. Since we're going to be testing things that affect the slot state,
# also increase the standby feedback interval to ensure timely updates.
my ($slotname_1, $slotname_2) = ('standby_1', 'standby_2');
-$node_master->append_conf('postgresql.conf', "max_replication_slots = 4");
-$node_master->restart;
-is( $node_master->psql(
+$node_primary->append_conf('postgresql.conf', "max_replication_slots = 4");
+$node_primary->restart;
+is( $node_primary->psql(
'postgres',
qq[SELECT pg_create_physical_replication_slot('$slotname_1');]),
0,
- 'physical slot created on master');
+ 'physical slot created on primary');
$node_standby_1->append_conf('postgresql.conf',
"primary_slot_name = $slotname_1");
$node_standby_1->append_conf('postgresql.conf',
@@ -231,7 +231,7 @@ sub get_slot_xmins
# There's no hot standby feedback and there are no logical slots on either peer
# so xmin and catalog_xmin should be null on both slots.
-my ($xmin, $catalog_xmin) = get_slot_xmins($node_master, $slotname_1,
+my ($xmin, $catalog_xmin) = get_slot_xmins($node_primary, $slotname_1,
"xmin IS NULL AND catalog_xmin IS NULL");
is($xmin, '', 'xmin of non-cascaded slot null with no hs_feedback');
is($catalog_xmin, '',
@@ -244,20 +244,20 @@ is($catalog_xmin, '',
'catalog xmin of cascaded slot null with no hs_feedback');
# Replication still works?
-$node_master->safe_psql('postgres', 'CREATE TABLE replayed(val integer);');
+$node_primary->safe_psql('postgres', 'CREATE TABLE replayed(val integer);');
sub replay_check
{
- my $newval = $node_master->safe_psql('postgres',
+ my $newval = $node_primary->safe_psql('postgres',
'INSERT INTO replayed(val) SELECT coalesce(max(val),0) + 1 AS newval FROM replayed RETURNING val'
);
- $node_master->wait_for_catchup($node_standby_1, 'replay',
- $node_master->lsn('insert'));
+ $node_primary->wait_for_catchup($node_standby_1, 'replay',
+ $node_primary->lsn('insert'));
$node_standby_1->wait_for_catchup($node_standby_2, 'replay',
$node_standby_1->lsn('replay'));
$node_standby_1->safe_psql('postgres',
qq[SELECT 1 FROM replayed WHERE val = $newval])
- or die "standby_1 didn't replay master value $newval";
+ or die "standby_1 didn't replay primary value $newval";
$node_standby_2->safe_psql('postgres',
qq[SELECT 1 FROM replayed WHERE val = $newval])
or die "standby_2 didn't replay standby_1 value $newval";
@@ -278,7 +278,7 @@ $node_standby_2->safe_psql('postgres',
$node_standby_2->reload;
replay_check();
-($xmin, $catalog_xmin) = get_slot_xmins($node_master, $slotname_1,
+($xmin, $catalog_xmin) = get_slot_xmins($node_primary, $slotname_1,
"xmin IS NOT NULL AND catalog_xmin IS NULL");
isnt($xmin, '', 'xmin of non-cascaded slot non-null with hs feedback');
is($catalog_xmin, '',
@@ -291,7 +291,7 @@ is($catalog_xmin1, '',
'catalog xmin of cascaded slot still null with hs_feedback');
note "doing some work to advance xmin";
-$node_master->safe_psql(
+$node_primary->safe_psql(
'postgres', q{
do $$
begin
@@ -306,12 +306,12 @@ begin
end$$;
});
-$node_master->safe_psql('postgres', 'VACUUM;');
-$node_master->safe_psql('postgres', 'CHECKPOINT;');
+$node_primary->safe_psql('postgres', 'VACUUM;');
+$node_primary->safe_psql('postgres', 'CHECKPOINT;');
my ($xmin2, $catalog_xmin2) =
- get_slot_xmins($node_master, $slotname_1, "xmin <> '$xmin'");
-note "master slot's new xmin $xmin2, old xmin $xmin";
+ get_slot_xmins($node_primary, $slotname_1, "xmin <> '$xmin'");
+note "primary slot's new xmin $xmin2, old xmin $xmin";
isnt($xmin2, $xmin, 'xmin of non-cascaded slot with hs feedback has changed');
is($catalog_xmin2, '',
'catalog xmin of non-cascaded slot still null with hs_feedback unchanged'
@@ -335,7 +335,7 @@ $node_standby_2->safe_psql('postgres',
$node_standby_2->reload;
replay_check();
-($xmin, $catalog_xmin) = get_slot_xmins($node_master, $slotname_1,
+($xmin, $catalog_xmin) = get_slot_xmins($node_primary, $slotname_1,
"xmin IS NULL AND catalog_xmin IS NULL");
is($xmin, '', 'xmin of non-cascaded slot null with hs feedback reset');
is($catalog_xmin, '',
@@ -349,44 +349,44 @@ is($catalog_xmin, '',
note "check change primary_conninfo without restart";
$node_standby_2->append_conf('postgresql.conf', "primary_slot_name = ''");
-$node_standby_2->enable_streaming($node_master);
+$node_standby_2->enable_streaming($node_primary);
$node_standby_2->reload;
# be sure do not streaming from cascade
$node_standby_1->stop;
-my $newval = $node_master->safe_psql('postgres',
+my $newval = $node_primary->safe_psql('postgres',
'INSERT INTO replayed(val) SELECT coalesce(max(val),0) + 1 AS newval FROM replayed RETURNING val'
);
-$node_master->wait_for_catchup($node_standby_2, 'replay',
- $node_master->lsn('insert'));
+$node_primary->wait_for_catchup($node_standby_2, 'replay',
+ $node_primary->lsn('insert'));
my $is_replayed = $node_standby_2->safe_psql('postgres',
qq[SELECT 1 FROM replayed WHERE val = $newval]);
-is($is_replayed, qq(1), "standby_2 didn't replay master value $newval");
+is($is_replayed, qq(1), "standby_2 didn't replay primary value $newval");
# Test physical slot advancing and its durability. Create a new slot on
# the primary, not used by any of the standbys. This reserves WAL at creation.
my $phys_slot = 'phys_slot';
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"SELECT pg_create_physical_replication_slot('$phys_slot', true);");
-$node_master->psql(
+$node_primary->psql(
'postgres', "
CREATE TABLE tab_phys_slot (a int);
INSERT INTO tab_phys_slot VALUES (generate_series(1,10));");
my $current_lsn =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
chomp($current_lsn);
-my $psql_rc = $node_master->psql('postgres',
+my $psql_rc = $node_primary->psql('postgres',
"SELECT pg_replication_slot_advance('$phys_slot', '$current_lsn'::pg_lsn);"
);
is($psql_rc, '0', 'slot advancing with physical slot');
-my $phys_restart_lsn_pre = $node_master->safe_psql('postgres',
+my $phys_restart_lsn_pre = $node_primary->safe_psql('postgres',
"SELECT restart_lsn from pg_replication_slots WHERE slot_name = '$phys_slot';"
);
chomp($phys_restart_lsn_pre);
# Slot advance should persist across clean restarts.
-$node_master->restart;
-my $phys_restart_lsn_post = $node_master->safe_psql('postgres',
+$node_primary->restart;
+my $phys_restart_lsn_post = $node_primary->safe_psql('postgres',
"SELECT restart_lsn from pg_replication_slots WHERE slot_name = '$phys_slot';"
);
chomp($phys_restart_lsn_post);
diff --git a/src/test/recovery/t/002_archiving.pl b/src/test/recovery/t/002_archiving.pl
index 683c33b5100..cf8988f62a7 100644
--- a/src/test/recovery/t/002_archiving.pl
+++ b/src/test/recovery/t/002_archiving.pl
@@ -6,38 +6,38 @@ use TestLib;
use Test::More tests => 3;
use File::Copy;
-# Initialize master node, doing archives
-my $node_master = get_new_node('master');
-$node_master->init(
+# Initialize primary node, doing archives
+my $node_primary = get_new_node('primary');
+$node_primary->init(
has_archiving => 1,
allows_streaming => 1);
my $backup_name = 'my_backup';
# Start it
-$node_master->start;
+$node_primary->start;
# Take backup for standby
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
# Initialize standby node from backup, fetching WAL from archives
my $node_standby = get_new_node('standby');
-$node_standby->init_from_backup($node_master, $backup_name,
+$node_standby->init_from_backup($node_primary, $backup_name,
has_restoring => 1);
$node_standby->append_conf('postgresql.conf',
"wal_retrieve_retry_interval = '100ms'");
$node_standby->start;
-# Create some content on master
-$node_master->safe_psql('postgres',
+# Create some content on primary
+$node_primary->safe_psql('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a");
my $current_lsn =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
-# Force archiving of WAL file to make it present on master
-$node_master->safe_psql('postgres', "SELECT pg_switch_wal()");
+# Force archiving of WAL file to make it present on primary
+$node_primary->safe_psql('postgres', "SELECT pg_switch_wal()");
# Add some more content, it should not be present on standby
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(1001,2000))");
# Wait until necessary replay has been done on standby
@@ -60,7 +60,7 @@ is($result, qq(1000), 'check content from archives');
$node_standby->promote;
my $node_standby2 = get_new_node('standby2');
-$node_standby2->init_from_backup($node_master, $backup_name,
+$node_standby2->init_from_backup($node_primary, $backup_name,
has_restoring => 1);
$node_standby2->start;
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
index 8d114eb7ad5..cc701c5539e 100644
--- a/src/test/recovery/t/003_recovery_targets.pl
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -13,13 +13,13 @@ sub test_recovery_standby
{
my $test_name = shift;
my $node_name = shift;
- my $node_master = shift;
+ my $node_primary = shift;
my $recovery_params = shift;
my $num_rows = shift;
my $until_lsn = shift;
my $node_standby = get_new_node($node_name);
- $node_standby->init_from_backup($node_master, 'my_backup',
+ $node_standby->init_from_backup($node_primary, 'my_backup',
has_restoring => 1);
foreach my $param_item (@$recovery_params)
@@ -35,7 +35,7 @@ sub test_recovery_standby
$node_standby->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby to catch up";
- # Create some content on master and check its presence in standby
+ # Create some content on primary and check its presence in standby
my $result =
$node_standby->safe_psql('postgres', "SELECT count(*) FROM tab_int");
is($result, qq($num_rows), "check standby content for $test_name");
@@ -46,74 +46,74 @@ sub test_recovery_standby
return;
}
-# Initialize master node
-my $node_master = get_new_node('master');
-$node_master->init(has_archiving => 1, allows_streaming => 1);
+# Initialize primary node
+my $node_primary = get_new_node('primary');
+$node_primary->init(has_archiving => 1, allows_streaming => 1);
# Start it
-$node_master->start;
+$node_primary->start;
# Create data before taking the backup, aimed at testing
# recovery_target = 'immediate'
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a");
my $lsn1 =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
# Take backup from which all operations will be run
-$node_master->backup('my_backup');
+$node_primary->backup('my_backup');
# Insert some data with used as a replay reference, with a recovery
# target TXID.
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(1001,2000))");
-my $ret = $node_master->safe_psql('postgres',
+my $ret = $node_primary->safe_psql('postgres',
"SELECT pg_current_wal_lsn(), pg_current_xact_id();");
my ($lsn2, $recovery_txid) = split /\|/, $ret;
# More data, with recovery target timestamp
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(2001,3000))");
my $lsn3 =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
-my $recovery_time = $node_master->safe_psql('postgres', "SELECT now()");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+my $recovery_time = $node_primary->safe_psql('postgres', "SELECT now()");
# Even more data, this time with a recovery target name
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(3001,4000))");
my $recovery_name = "my_target";
my $lsn4 =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
-$node_master->safe_psql('postgres',
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+$node_primary->safe_psql('postgres',
"SELECT pg_create_restore_point('$recovery_name');");
# And now for a recovery target LSN
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(4001,5000))");
my $lsn5 = my $recovery_lsn =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn()");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn()");
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(5001,6000))");
# Force archiving of WAL file
-$node_master->safe_psql('postgres', "SELECT pg_switch_wal()");
+$node_primary->safe_psql('postgres', "SELECT pg_switch_wal()");
# Test recovery targets
my @recovery_params = ("recovery_target = 'immediate'");
test_recovery_standby('immediate target',
- 'standby_1', $node_master, \@recovery_params, "1000", $lsn1);
+ 'standby_1', $node_primary, \@recovery_params, "1000", $lsn1);
@recovery_params = ("recovery_target_xid = '$recovery_txid'");
-test_recovery_standby('XID', 'standby_2', $node_master, \@recovery_params,
+test_recovery_standby('XID', 'standby_2', $node_primary, \@recovery_params,
"2000", $lsn2);
@recovery_params = ("recovery_target_time = '$recovery_time'");
-test_recovery_standby('time', 'standby_3', $node_master, \@recovery_params,
+test_recovery_standby('time', 'standby_3', $node_primary, \@recovery_params,
"3000", $lsn3);
@recovery_params = ("recovery_target_name = '$recovery_name'");
-test_recovery_standby('name', 'standby_4', $node_master, \@recovery_params,
+test_recovery_standby('name', 'standby_4', $node_primary, \@recovery_params,
"4000", $lsn4);
@recovery_params = ("recovery_target_lsn = '$recovery_lsn'");
-test_recovery_standby('LSN', 'standby_5', $node_master, \@recovery_params,
+test_recovery_standby('LSN', 'standby_5', $node_primary, \@recovery_params,
"5000", $lsn5);
# Multiple targets
@@ -127,10 +127,10 @@ test_recovery_standby('LSN', 'standby_5', $node_master, \@recovery_params,
"recovery_target_name = ''",
"recovery_target_time = '$recovery_time'");
test_recovery_standby('multiple overriding settings',
- 'standby_6', $node_master, \@recovery_params, "3000", $lsn3);
+ 'standby_6', $node_primary, \@recovery_params, "3000", $lsn3);
my $node_standby = get_new_node('standby_7');
-$node_standby->init_from_backup($node_master, 'my_backup',
+$node_standby->init_from_backup($node_primary, 'my_backup',
has_restoring => 1);
$node_standby->append_conf(
'postgresql.conf', "recovery_target_name = '$recovery_name'
@@ -151,7 +151,7 @@ ok($logfile =~ qr/multiple recovery targets specified/,
$node_standby = get_new_node('standby_8');
$node_standby->init_from_backup(
- $node_master, 'my_backup',
+ $node_primary, 'my_backup',
has_restoring => 1,
standby => 0);
$node_standby->append_conf('postgresql.conf',
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index 7e952d36676..1ecdb0eba0d 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -10,35 +10,35 @@ use Test::More tests => 2;
$ENV{PGDATABASE} = 'postgres';
-# Initialize master node
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
-$node_master->start;
+# Initialize primary node
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
+$node_primary->start;
# Take backup
my $backup_name = 'my_backup';
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
# Create two standbys linking to it
my $node_standby_1 = get_new_node('standby_1');
-$node_standby_1->init_from_backup($node_master, $backup_name,
+$node_standby_1->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby_1->start;
my $node_standby_2 = get_new_node('standby_2');
-$node_standby_2->init_from_backup($node_master, $backup_name,
+$node_standby_2->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby_2->start;
-# Create some content on master
-$node_master->safe_psql('postgres',
+# Create some content on primary
+$node_primary->safe_psql('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a");
# Wait until standby has replayed enough data on standby 1
-$node_master->wait_for_catchup($node_standby_1, 'replay',
- $node_master->lsn('write'));
+$node_primary->wait_for_catchup($node_standby_1, 'replay',
+ $node_primary->lsn('write'));
-# Stop and remove master
-$node_master->teardown_node;
+# Stop and remove primary
+$node_primary->teardown_node;
# promote standby 1 using "pg_promote", switching it to a new timeline
my $psql_out = '';
diff --git a/src/test/recovery/t/005_replay_delay.pl b/src/test/recovery/t/005_replay_delay.pl
index 6c85c928c10..459772f6c44 100644
--- a/src/test/recovery/t/005_replay_delay.pl
+++ b/src/test/recovery/t/005_replay_delay.pl
@@ -6,23 +6,23 @@ use PostgresNode;
use TestLib;
use Test::More tests => 1;
-# Initialize master node
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
-$node_master->start;
+# Initialize primary node
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
+$node_primary->start;
# And some content
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"CREATE TABLE tab_int AS SELECT generate_series(1, 10) AS a");
# Take backup
my $backup_name = 'my_backup';
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
# Create streaming standby from backup
my $node_standby = get_new_node('standby');
my $delay = 3;
-$node_standby->init_from_backup($node_master, $backup_name,
+$node_standby->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby->append_conf(
'postgresql.conf', qq(
@@ -30,19 +30,19 @@ recovery_min_apply_delay = '${delay}s'
));
$node_standby->start;
-# Make new content on master and check its presence in standby depending
+# Make new content on primary and check its presence in standby depending
# on the delay applied above. Before doing the insertion, get the
# current timestamp that will be used as a comparison base. Even on slow
# machines, this allows to have a predictable behavior when comparing the
-# delay between data insertion moment on master and replay time on standby.
-my $master_insert_time = time();
-$node_master->safe_psql('postgres',
+# delay between data insertion moment on primary and replay time on standby.
+my $primary_insert_time = time();
+$node_primary->safe_psql('postgres',
"INSERT INTO tab_int VALUES (generate_series(11, 20))");
# Now wait for replay to complete on standby. We're done waiting when the
-# standby has replayed up to the previously saved master LSN.
+# standby has replayed up to the previously saved primary LSN.
my $until_lsn =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn()");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn()");
$node_standby->poll_query_until('postgres',
"SELECT (pg_last_wal_replay_lsn() - '$until_lsn'::pg_lsn) >= 0")
@@ -50,5 +50,5 @@ $node_standby->poll_query_until('postgres',
# This test is successful if and only if the LSN has been applied with at least
# the configured apply delay.
-ok(time() - $master_insert_time >= $delay,
+ok(time() - $primary_insert_time >= $delay,
"standby applies WAL only after replication delay");
diff --git a/src/test/recovery/t/006_logical_decoding.pl b/src/test/recovery/t/006_logical_decoding.pl
index 78229a7b92b..8cdfae1e1e2 100644
--- a/src/test/recovery/t/006_logical_decoding.pl
+++ b/src/test/recovery/t/006_logical_decoding.pl
@@ -10,25 +10,25 @@ use TestLib;
use Test::More tests => 14;
use Config;
-# Initialize master node
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
-$node_master->append_conf(
+# Initialize primary node
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
+$node_primary->append_conf(
'postgresql.conf', qq(
wal_level = logical
));
-$node_master->start;
-my $backup_name = 'master_backup';
+$node_primary->start;
+my $backup_name = 'primary_backup';
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
qq[CREATE TABLE decoding_test(x integer, y text);]);
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
qq[SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');]
);
# Cover walsender error shutdown code
-my ($result, $stdout, $stderr) = $node_master->psql(
+my ($result, $stdout, $stderr) = $node_primary->psql(
'template1',
qq[START_REPLICATION SLOT test_slot LOGICAL 0/0],
replication => 'database');
@@ -38,19 +38,19 @@ ok( $stderr =~
# Check case of walsender not using a database connection. Logical
# decoding should not be allowed.
-($result, $stdout, $stderr) = $node_master->psql(
+($result, $stdout, $stderr) = $node_primary->psql(
'template1',
qq[START_REPLICATION SLOT s1 LOGICAL 0/1],
replication => 'true');
ok($stderr =~ /ERROR: logical decoding requires a database connection/,
"Logical decoding fails on non-database connection");
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
qq[INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,10) s;]
);
# Basic decoding works
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
qq[SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);]);
is(scalar(my @foobar = split /^/m, $result),
12, 'Decoding produced 12 rows inc BEGIN/COMMIT');
@@ -58,17 +58,17 @@ is(scalar(my @foobar = split /^/m, $result),
# If we immediately crash the server we might lose the progress we just made
# and replay the same changes again. But a clean shutdown should never repeat
# the same changes when we use the SQL decoding interface.
-$node_master->restart('fast');
+$node_primary->restart('fast');
# There are no new writes, so the result should be empty.
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
qq[SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);]);
chomp($result);
is($result, '', 'Decoding after fast restart repeats no rows');
# Insert some rows and verify that we get the same results from pg_recvlogical
# and the SQL interface.
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
qq[INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,4) s;]
);
@@ -79,22 +79,22 @@ table public.decoding_test: INSERT: x[integer]:3 y[text]:'3'
table public.decoding_test: INSERT: x[integer]:4 y[text]:'4'
COMMIT};
-my $stdout_sql = $node_master->safe_psql('postgres',
+my $stdout_sql = $node_primary->safe_psql('postgres',
qq[SELECT data FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');]
);
is($stdout_sql, $expected, 'got expected output from SQL decoding session');
-my $endpos = $node_master->safe_psql('postgres',
+my $endpos = $node_primary->safe_psql('postgres',
"SELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;"
);
print "waiting to replay $endpos\n";
# Insert some rows after $endpos, which we won't read.
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
qq[INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(5,50) s;]
);
-my $stdout_recv = $node_master->pg_recvlogical_upto(
+my $stdout_recv = $node_primary->pg_recvlogical_upto(
'postgres', 'test_slot', $endpos, 180,
'include-xids' => '0',
'skip-empty-xacts' => '1');
@@ -102,27 +102,27 @@ chomp($stdout_recv);
is($stdout_recv, $expected,
'got same expected output from pg_recvlogical decoding session');
-$node_master->poll_query_until('postgres',
+$node_primary->poll_query_until('postgres',
"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'test_slot' AND active_pid IS NULL)"
) or die "slot never became inactive";
-$stdout_recv = $node_master->pg_recvlogical_upto(
+$stdout_recv = $node_primary->pg_recvlogical_upto(
'postgres', 'test_slot', $endpos, 180,
'include-xids' => '0',
'skip-empty-xacts' => '1');
chomp($stdout_recv);
is($stdout_recv, '', 'pg_recvlogical acknowledged changes');
-$node_master->safe_psql('postgres', 'CREATE DATABASE otherdb');
+$node_primary->safe_psql('postgres', 'CREATE DATABASE otherdb');
-is( $node_master->psql(
+is( $node_primary->psql(
'otherdb',
"SELECT lsn FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;"
),
3,
'replaying logical slot from another database fails');
-$node_master->safe_psql('otherdb',
+$node_primary->safe_psql('otherdb',
qq[SELECT pg_create_logical_replication_slot('otherdb_slot', 'test_decoding');]
);
@@ -135,51 +135,51 @@ SKIP:
my $pg_recvlogical = IPC::Run::start(
[
- 'pg_recvlogical', '-d', $node_master->connstr('otherdb'),
+ 'pg_recvlogical', '-d', $node_primary->connstr('otherdb'),
'-S', 'otherdb_slot', '-f', '-', '--start'
]);
- $node_master->poll_query_until('otherdb',
+ $node_primary->poll_query_until('otherdb',
"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'otherdb_slot' AND active_pid IS NOT NULL)"
) or die "slot never became active";
- is($node_master->psql('postgres', 'DROP DATABASE otherdb'),
+ is($node_primary->psql('postgres', 'DROP DATABASE otherdb'),
3, 'dropping a DB with active logical slots fails');
$pg_recvlogical->kill_kill;
- is($node_master->slot('otherdb_slot')->{'slot_name'},
+ is($node_primary->slot('otherdb_slot')->{'slot_name'},
undef, 'logical slot still exists');
}
-$node_master->poll_query_until('otherdb',
+$node_primary->poll_query_until('otherdb',
"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = 'otherdb_slot' AND active_pid IS NULL)"
) or die "slot never became inactive";
-is($node_master->psql('postgres', 'DROP DATABASE otherdb'),
+is($node_primary->psql('postgres', 'DROP DATABASE otherdb'),
0, 'dropping a DB with inactive logical slots succeeds');
-is($node_master->slot('otherdb_slot')->{'slot_name'},
+is($node_primary->slot('otherdb_slot')->{'slot_name'},
undef, 'logical slot was actually dropped with DB');
# Test logical slot advancing and its durability.
my $logical_slot = 'logical_slot';
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"SELECT pg_create_logical_replication_slot('$logical_slot', 'test_decoding', false);"
);
-$node_master->psql(
+$node_primary->psql(
'postgres', "
CREATE TABLE tab_logical_slot (a int);
INSERT INTO tab_logical_slot VALUES (generate_series(1,10));");
my $current_lsn =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
chomp($current_lsn);
-my $psql_rc = $node_master->psql('postgres',
+my $psql_rc = $node_primary->psql('postgres',
"SELECT pg_replication_slot_advance('$logical_slot', '$current_lsn'::pg_lsn);"
);
is($psql_rc, '0', 'slot advancing with logical slot');
-my $logical_restart_lsn_pre = $node_master->safe_psql('postgres',
+my $logical_restart_lsn_pre = $node_primary->safe_psql('postgres',
"SELECT restart_lsn from pg_replication_slots WHERE slot_name = '$logical_slot';"
);
chomp($logical_restart_lsn_pre);
# Slot advance should persist across clean restarts.
-$node_master->restart;
-my $logical_restart_lsn_post = $node_master->safe_psql('postgres',
+$node_primary->restart;
+my $logical_restart_lsn_post = $node_primary->safe_psql('postgres',
"SELECT restart_lsn from pg_replication_slots WHERE slot_name = '$logical_slot';"
);
chomp($logical_restart_lsn_post);
@@ -187,4 +187,4 @@ ok(($logical_restart_lsn_pre cmp $logical_restart_lsn_post) == 0,
"logical slot advance persists across restarts");
# done with the node
-$node_master->stop;
+$node_primary->stop;
diff --git a/src/test/recovery/t/007_sync_rep.pl b/src/test/recovery/t/007_sync_rep.pl
index 05803bed4e3..e3c6738d3ab 100644
--- a/src/test/recovery/t/007_sync_rep.pl
+++ b/src/test/recovery/t/007_sync_rep.pl
@@ -32,53 +32,53 @@ sub test_sync_state
# until the standby is confirmed as registered.
sub start_standby_and_wait
{
- my ($master, $standby) = @_;
- my $master_name = $master->name;
+ my ($primary, $standby) = @_;
+ my $primary_name = $primary->name;
my $standby_name = $standby->name;
my $query =
"SELECT count(1) = 1 FROM pg_stat_replication WHERE application_name = '$standby_name'";
$standby->start;
- print("### Waiting for standby \"$standby_name\" on \"$master_name\"\n");
- $master->poll_query_until('postgres', $query);
+ print("### Waiting for standby \"$standby_name\" on \"$primary_name\"\n");
+ $primary->poll_query_until('postgres', $query);
return;
}
-# Initialize master node
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
-$node_master->start;
-my $backup_name = 'master_backup';
+# Initialize primary node
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
+$node_primary->start;
+my $backup_name = 'primary_backup';
# Take backup
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
# Create all the standbys. Their status on the primary is checked to ensure
# the ordering of each one of them in the WAL sender array of the primary.
-# Create standby1 linking to master
+# Create standby1 linking to primary
my $node_standby_1 = get_new_node('standby1');
-$node_standby_1->init_from_backup($node_master, $backup_name,
+$node_standby_1->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
-start_standby_and_wait($node_master, $node_standby_1);
+start_standby_and_wait($node_primary, $node_standby_1);
-# Create standby2 linking to master
+# Create standby2 linking to primary
my $node_standby_2 = get_new_node('standby2');
-$node_standby_2->init_from_backup($node_master, $backup_name,
+$node_standby_2->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
-start_standby_and_wait($node_master, $node_standby_2);
+start_standby_and_wait($node_primary, $node_standby_2);
-# Create standby3 linking to master
+# Create standby3 linking to primary
my $node_standby_3 = get_new_node('standby3');
-$node_standby_3->init_from_backup($node_master, $backup_name,
+$node_standby_3->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
-start_standby_and_wait($node_master, $node_standby_3);
+start_standby_and_wait($node_primary, $node_standby_3);
# Check that sync_state is determined correctly when
# synchronous_standby_names is specified in old syntax.
test_sync_state(
- $node_master, qq(standby1|1|sync
+ $node_primary, qq(standby1|1|sync
standby2|2|potential
standby3|0|async),
'old syntax of synchronous_standby_names',
@@ -90,7 +90,7 @@ standby3|0|async),
# it's stored in the head of WalSnd array which manages
# all the standbys though they have the same priority.
test_sync_state(
- $node_master, qq(standby1|1|sync
+ $node_primary, qq(standby1|1|sync
standby2|1|potential
standby3|1|potential),
'asterisk in synchronous_standby_names',
@@ -105,23 +105,23 @@ $node_standby_3->stop;
# Make sure that each standby reports back to the primary in the wanted
# order.
-start_standby_and_wait($node_master, $node_standby_2);
-start_standby_and_wait($node_master, $node_standby_3);
+start_standby_and_wait($node_primary, $node_standby_2);
+start_standby_and_wait($node_primary, $node_standby_3);
# Specify 2 as the number of sync standbys.
# Check that two standbys are in 'sync' state.
test_sync_state(
- $node_master, qq(standby2|2|sync
+ $node_primary, qq(standby2|2|sync
standby3|3|sync),
'2 synchronous standbys',
'2(standby1,standby2,standby3)');
# Start standby1
-start_standby_and_wait($node_master, $node_standby_1);
+start_standby_and_wait($node_primary, $node_standby_1);
-# Create standby4 linking to master
+# Create standby4 linking to primary
my $node_standby_4 = get_new_node('standby4');
-$node_standby_4->init_from_backup($node_master, $backup_name,
+$node_standby_4->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby_4->start;
@@ -130,7 +130,7 @@ $node_standby_4->start;
# standby3 appearing later represents potential, and standby4 is
# in 'async' state because it's not in the list.
test_sync_state(
- $node_master, qq(standby1|1|sync
+ $node_primary, qq(standby1|1|sync
standby2|2|sync
standby3|3|potential
standby4|0|async),
@@ -140,7 +140,7 @@ standby4|0|async),
# when num_sync exceeds the number of names of potential sync standbys
# specified in synchronous_standby_names.
test_sync_state(
- $node_master, qq(standby1|0|async
+ $node_primary, qq(standby1|0|async
standby2|4|sync
standby3|3|sync
standby4|1|sync),
@@ -154,7 +154,7 @@ standby4|1|sync),
# second standby listed first in the WAL sender array, which is
# standby2 in this case.
test_sync_state(
- $node_master, qq(standby1|1|sync
+ $node_primary, qq(standby1|1|sync
standby2|2|sync
standby3|2|potential
standby4|2|potential),
@@ -164,7 +164,7 @@ standby4|2|potential),
# Check that the setting of '2(*)' chooses standby2 and standby3 that are stored
# earlier in WalSnd array as sync standbys.
test_sync_state(
- $node_master, qq(standby1|1|potential
+ $node_primary, qq(standby1|1|potential
standby2|1|sync
standby3|1|sync
standby4|1|potential),
@@ -177,7 +177,7 @@ $node_standby_3->stop;
# Check that the state of standby1 stored earlier in WalSnd array than
# standby4 is transited from potential to sync.
test_sync_state(
- $node_master, qq(standby1|1|sync
+ $node_primary, qq(standby1|1|sync
standby2|1|sync
standby4|1|potential),
'potential standby found earlier in array is promoted to sync');
@@ -185,7 +185,7 @@ standby4|1|potential),
# Check that standby1 and standby2 are chosen as sync standbys
# based on their priorities.
test_sync_state(
- $node_master, qq(standby1|1|sync
+ $node_primary, qq(standby1|1|sync
standby2|2|sync
standby4|0|async),
'priority-based sync replication specified by FIRST keyword',
@@ -194,7 +194,7 @@ standby4|0|async),
# Check that all the listed standbys are considered as candidates
# for sync standbys in a quorum-based sync replication.
test_sync_state(
- $node_master, qq(standby1|1|quorum
+ $node_primary, qq(standby1|1|quorum
standby2|1|quorum
standby4|0|async),
'2 quorum and 1 async',
@@ -206,7 +206,7 @@ $node_standby_3->start;
# Check that the setting of 'ANY 2(*)' chooses all standbys as
# candidates for quorum sync standbys.
test_sync_state(
- $node_master, qq(standby1|1|quorum
+ $node_primary, qq(standby1|1|quorum
standby2|1|quorum
standby3|1|quorum
standby4|1|quorum),
diff --git a/src/test/recovery/t/008_fsm_truncation.pl b/src/test/recovery/t/008_fsm_truncation.pl
index ddab464a973..37967c11744 100644
--- a/src/test/recovery/t/008_fsm_truncation.pl
+++ b/src/test/recovery/t/008_fsm_truncation.pl
@@ -9,10 +9,10 @@ use PostgresNode;
use TestLib;
use Test::More tests => 1;
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1);
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1);
-$node_master->append_conf(
+$node_primary->append_conf(
'postgresql.conf', qq{
fsync = on
wal_log_hints = on
@@ -20,17 +20,17 @@ max_prepared_transactions = 5
autovacuum = off
});
-# Create a master node and its standby, initializing both with some data
+# Create a primary node and its standby, initializing both with some data
# at the same time.
-$node_master->start;
+$node_primary->start;
-$node_master->backup('master_backup');
+$node_primary->backup('primary_backup');
my $node_standby = get_new_node('standby');
-$node_standby->init_from_backup($node_master, 'master_backup',
+$node_standby->init_from_backup($node_primary, 'primary_backup',
has_streaming => 1);
$node_standby->start;
-$node_master->psql(
+$node_primary->psql(
'postgres', qq{
create table testtab (a int, b char(100));
insert into testtab select generate_series(1,1000), 'foo';
@@ -39,7 +39,7 @@ delete from testtab where ctid > '(8,0)';
});
# Take a lock on the table to prevent following vacuum from truncating it
-$node_master->psql(
+$node_primary->psql(
'postgres', qq{
begin;
lock table testtab in row share mode;
@@ -47,14 +47,14 @@ prepare transaction 'p1';
});
# Vacuum, update FSM without truncation
-$node_master->psql('postgres', 'vacuum verbose testtab');
+$node_primary->psql('postgres', 'vacuum verbose testtab');
# Force a checkpoint
-$node_master->psql('postgres', 'checkpoint');
+$node_primary->psql('postgres', 'checkpoint');
# Now do some more insert/deletes, another vacuum to ensure full-page writes
# are done
-$node_master->psql(
+$node_primary->psql(
'postgres', qq{
insert into testtab select generate_series(1,1000), 'foo';
delete from testtab where ctid > '(8,0)';
@@ -65,15 +65,15 @@ vacuum verbose testtab;
$node_standby->psql('postgres', 'checkpoint');
# Release the lock, vacuum again which should lead to truncation
-$node_master->psql(
+$node_primary->psql(
'postgres', qq{
rollback prepared 'p1';
vacuum verbose testtab;
});
-$node_master->psql('postgres', 'checkpoint');
+$node_primary->psql('postgres', 'checkpoint');
my $until_lsn =
- $node_master->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
+ $node_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn();");
# Wait long enough for standby to receive and apply all WAL
my $caughtup_query =
diff --git a/src/test/recovery/t/009_twophase.pl b/src/test/recovery/t/009_twophase.pl
index 1b748ad857b..9da3464bc1d 100644
--- a/src/test/recovery/t/009_twophase.pl
+++ b/src/test/recovery/t/009_twophase.pl
@@ -23,7 +23,7 @@ sub configure_and_reload
return;
}
-# Set up two nodes, which will alternately be master and replication standby.
+# Set up two nodes, which will alternately be primary and replication standby.
# Setup london node
my $node_london = get_new_node("london");
@@ -46,13 +46,13 @@ $node_paris->start;
configure_and_reload($node_london, "synchronous_standby_names = 'paris'");
configure_and_reload($node_paris, "synchronous_standby_names = 'london'");
-# Set up nonce names for current master and standby nodes
-note "Initially, london is master and paris is standby";
-my ($cur_master, $cur_standby) = ($node_london, $node_paris);
-my $cur_master_name = $cur_master->name;
+# Set up nonce names for current primary and standby nodes
+note "Initially, london is primary and paris is standby";
+my ($cur_primary, $cur_standby) = ($node_london, $node_paris);
+my $cur_primary_name = $cur_primary->name;
# Create table we'll use in the test transactions
-$cur_master->psql('postgres', "CREATE TABLE t_009_tbl (id int, msg text)");
+$cur_primary->psql('postgres', "CREATE TABLE t_009_tbl (id int, msg text)");
###############################################################################
# Check that we can commit and abort transaction after soft restart.
@@ -61,25 +61,25 @@ $cur_master->psql('postgres', "CREATE TABLE t_009_tbl (id int, msg text)");
# files.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (1, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (1, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (2, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (2, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_1';
BEGIN;
- INSERT INTO t_009_tbl VALUES (3, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (3, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (4, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (4, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_2';");
-$cur_master->stop;
-$cur_master->start;
+$cur_primary->stop;
+$cur_primary->start;
-$psql_rc = $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_1'");
+$psql_rc = $cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_1'");
is($psql_rc, '0', 'Commit prepared transaction after restart');
-$psql_rc = $cur_master->psql('postgres', "ROLLBACK PREPARED 'xact_009_2'");
+$psql_rc = $cur_primary->psql('postgres', "ROLLBACK PREPARED 'xact_009_2'");
is($psql_rc, '0', 'Rollback prepared transaction after restart');
###############################################################################
@@ -88,50 +88,50 @@ is($psql_rc, '0', 'Rollback prepared transaction after restart');
# transaction using dedicated WAL records.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
CHECKPOINT;
BEGIN;
- INSERT INTO t_009_tbl VALUES (5, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (5, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (6, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (6, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_3';
BEGIN;
- INSERT INTO t_009_tbl VALUES (7, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (7, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (8, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (8, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_4';");
-$cur_master->teardown_node;
-$cur_master->start;
+$cur_primary->teardown_node;
+$cur_primary->start;
-$psql_rc = $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_3'");
+$psql_rc = $cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_3'");
is($psql_rc, '0', 'Commit prepared transaction after teardown');
-$psql_rc = $cur_master->psql('postgres', "ROLLBACK PREPARED 'xact_009_4'");
+$psql_rc = $cur_primary->psql('postgres', "ROLLBACK PREPARED 'xact_009_4'");
is($psql_rc, '0', 'Rollback prepared transaction after teardown');
###############################################################################
# Check that WAL replay can handle several transactions with same GID name.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
CHECKPOINT;
BEGIN;
- INSERT INTO t_009_tbl VALUES (9, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (9, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (10, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (10, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_5';
COMMIT PREPARED 'xact_009_5';
BEGIN;
- INSERT INTO t_009_tbl VALUES (11, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (11, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (12, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (12, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_5';");
-$cur_master->teardown_node;
-$cur_master->start;
+$cur_primary->teardown_node;
+$cur_primary->start;
-$psql_rc = $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_5'");
+$psql_rc = $cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_5'");
is($psql_rc, '0', 'Replay several transactions with same GID');
###############################################################################
@@ -139,39 +139,39 @@ is($psql_rc, '0', 'Replay several transactions with same GID');
# while replaying transaction commits.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (13, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (13, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (14, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (14, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_6';
COMMIT PREPARED 'xact_009_6';");
-$cur_master->teardown_node;
-$cur_master->start;
-$psql_rc = $cur_master->psql(
+$cur_primary->teardown_node;
+$cur_primary->start;
+$psql_rc = $cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (15, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (15, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (16, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (16, 'issued to ${cur_primary_name}');
-- This prepare can fail due to conflicting GID or locks conflicts if
-- replay did not fully cleanup its state on previous commit.
PREPARE TRANSACTION 'xact_009_7';");
is($psql_rc, '0', "Cleanup of shared memory state for 2PC commit");
-$cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_7'");
+$cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_7'");
###############################################################################
# Check that WAL replay will cleanup its shared memory state on running standby.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (17, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (17, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (18, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (18, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_8';
COMMIT PREPARED 'xact_009_8';");
$cur_standby->psql(
@@ -186,15 +186,15 @@ is($psql_out, '0',
# prepare and commit to use on-disk twophase files.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (19, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (19, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (20, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (20, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_9';");
$cur_standby->psql('postgres', "CHECKPOINT");
-$cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_9'");
+$cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_9'");
$cur_standby->psql(
'postgres',
"SELECT count(*) FROM pg_prepared_xacts",
@@ -206,114 +206,114 @@ is($psql_out, '0',
# Check that prepared transactions can be committed on promoted standby.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (21, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (21, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (22, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (22, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_10';");
-$cur_master->teardown_node;
+$cur_primary->teardown_node;
$cur_standby->promote;
# change roles
-note "Now paris is master and london is standby";
-($cur_master, $cur_standby) = ($node_paris, $node_london);
-$cur_master_name = $cur_master->name;
+note "Now paris is primary and london is standby";
+($cur_primary, $cur_standby) = ($node_paris, $node_london);
+$cur_primary_name = $cur_primary->name;
# because london is not running at this point, we can't use syncrep commit
# on this command
-$psql_rc = $cur_master->psql('postgres',
+$psql_rc = $cur_primary->psql('postgres',
"SET synchronous_commit = off; COMMIT PREPARED 'xact_009_10'");
is($psql_rc, '0', "Restore of prepared transaction on promoted standby");
-# restart old master as new standby
-$cur_standby->enable_streaming($cur_master);
+# restart old primary as new standby
+$cur_standby->enable_streaming($cur_primary);
$cur_standby->start;
###############################################################################
# Check that prepared transactions are replayed after soft restart of standby
-# while master is down. Since standby knows that master is down it uses a
+# while primary is down. Since standby knows that primary is down it uses a
# different code path on startup to ensure that the status of transactions is
# consistent.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (23, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (23, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (24, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (24, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_11';");
-$cur_master->stop;
+$cur_primary->stop;
$cur_standby->restart;
$cur_standby->promote;
# change roles
-note "Now london is master and paris is standby";
-($cur_master, $cur_standby) = ($node_london, $node_paris);
-$cur_master_name = $cur_master->name;
+note "Now london is primary and paris is standby";
+($cur_primary, $cur_standby) = ($node_london, $node_paris);
+$cur_primary_name = $cur_primary->name;
-$cur_master->psql(
+$cur_primary->psql(
'postgres',
"SELECT count(*) FROM pg_prepared_xacts",
stdout => \$psql_out);
is($psql_out, '1',
- "Restore prepared transactions from files with master down");
+ "Restore prepared transactions from files with primary down");
-# restart old master as new standby
-$cur_standby->enable_streaming($cur_master);
+# restart old primary as new standby
+$cur_standby->enable_streaming($cur_primary);
$cur_standby->start;
-$cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_11'");
+$cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_11'");
###############################################################################
# Check that prepared transactions are correctly replayed after standby hard
-# restart while master is down.
+# restart while primary is down.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
- INSERT INTO t_009_tbl VALUES (25, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (25, 'issued to ${cur_primary_name}');
SAVEPOINT s1;
- INSERT INTO t_009_tbl VALUES (26, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl VALUES (26, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_12';
");
-$cur_master->stop;
+$cur_primary->stop;
$cur_standby->teardown_node;
$cur_standby->start;
$cur_standby->promote;
# change roles
-note "Now paris is master and london is standby";
-($cur_master, $cur_standby) = ($node_paris, $node_london);
-$cur_master_name = $cur_master->name;
+note "Now paris is primary and london is standby";
+($cur_primary, $cur_standby) = ($node_paris, $node_london);
+$cur_primary_name = $cur_primary->name;
-$cur_master->psql(
+$cur_primary->psql(
'postgres',
"SELECT count(*) FROM pg_prepared_xacts",
stdout => \$psql_out);
is($psql_out, '1',
- "Restore prepared transactions from records with master down");
+ "Restore prepared transactions from records with primary down");
-# restart old master as new standby
-$cur_standby->enable_streaming($cur_master);
+# restart old primary as new standby
+$cur_standby->enable_streaming($cur_primary);
$cur_standby->start;
-$cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_12'");
+$cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_12'");
###############################################################################
# Check for a lock conflict between prepared transaction with DDL inside and
# replay of XLOG_STANDBY_LOCK wal record.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
CREATE TABLE t_009_tbl2 (id int, msg text);
SAVEPOINT s1;
- INSERT INTO t_009_tbl2 VALUES (27, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl2 VALUES (27, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_13';
-- checkpoint will issue XLOG_STANDBY_LOCK that can conflict with lock
-- held by 'create table' statement
@@ -321,10 +321,10 @@ $cur_master->psql(
COMMIT PREPARED 'xact_009_13';");
# Ensure that last transaction is replayed on standby.
-my $cur_master_lsn =
- $cur_master->safe_psql('postgres', "SELECT pg_current_wal_lsn()");
+my $cur_primary_lsn =
+ $cur_primary->safe_psql('postgres', "SELECT pg_current_wal_lsn()");
my $caughtup_query =
- "SELECT '$cur_master_lsn'::pg_lsn <= pg_last_wal_replay_lsn()";
+ "SELECT '$cur_primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()";
$cur_standby->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for standby to catch up";
@@ -336,69 +336,69 @@ is($psql_out, '1', "Replay prepared transaction with DDL");
###############################################################################
# Check recovery of prepared transaction with DDL inside after a hard restart
-# of the master.
+# of the primary.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
CREATE TABLE t_009_tbl3 (id int, msg text);
SAVEPOINT s1;
- INSERT INTO t_009_tbl3 VALUES (28, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl3 VALUES (28, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_14';
BEGIN;
CREATE TABLE t_009_tbl4 (id int, msg text);
SAVEPOINT s1;
- INSERT INTO t_009_tbl4 VALUES (29, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl4 VALUES (29, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_15';");
-$cur_master->teardown_node;
-$cur_master->start;
+$cur_primary->teardown_node;
+$cur_primary->start;
-$psql_rc = $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_14'");
+$psql_rc = $cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_14'");
is($psql_rc, '0', 'Commit prepared transaction after teardown');
-$psql_rc = $cur_master->psql('postgres', "ROLLBACK PREPARED 'xact_009_15'");
+$psql_rc = $cur_primary->psql('postgres', "ROLLBACK PREPARED 'xact_009_15'");
is($psql_rc, '0', 'Rollback prepared transaction after teardown');
###############################################################################
# Check recovery of prepared transaction with DDL inside after a soft restart
-# of the master.
+# of the primary.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres', "
BEGIN;
CREATE TABLE t_009_tbl5 (id int, msg text);
SAVEPOINT s1;
- INSERT INTO t_009_tbl5 VALUES (30, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl5 VALUES (30, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_16';
BEGIN;
CREATE TABLE t_009_tbl6 (id int, msg text);
SAVEPOINT s1;
- INSERT INTO t_009_tbl6 VALUES (31, 'issued to ${cur_master_name}');
+ INSERT INTO t_009_tbl6 VALUES (31, 'issued to ${cur_primary_name}');
PREPARE TRANSACTION 'xact_009_17';");
-$cur_master->stop;
-$cur_master->start;
+$cur_primary->stop;
+$cur_primary->start;
-$psql_rc = $cur_master->psql('postgres', "COMMIT PREPARED 'xact_009_16'");
+$psql_rc = $cur_primary->psql('postgres', "COMMIT PREPARED 'xact_009_16'");
is($psql_rc, '0', 'Commit prepared transaction after restart');
-$psql_rc = $cur_master->psql('postgres', "ROLLBACK PREPARED 'xact_009_17'");
+$psql_rc = $cur_primary->psql('postgres', "ROLLBACK PREPARED 'xact_009_17'");
is($psql_rc, '0', 'Rollback prepared transaction after restart');
###############################################################################
# Verify expected data appears on both servers.
###############################################################################
-$cur_master->psql(
+$cur_primary->psql(
'postgres',
"SELECT count(*) FROM pg_prepared_xacts",
stdout => \$psql_out);
-is($psql_out, '0', "No uncommitted prepared transactions on master");
+is($psql_out, '0', "No uncommitted prepared transactions on primary");
-$cur_master->psql(
+$cur_primary->psql(
'postgres',
"SELECT * FROM t_009_tbl ORDER BY id",
stdout => \$psql_out);
@@ -424,15 +424,15 @@ is( $psql_out, qq{1|issued to london
24|issued to paris
25|issued to london
26|issued to london},
- "Check expected t_009_tbl data on master");
+ "Check expected t_009_tbl data on primary");
-$cur_master->psql(
+$cur_primary->psql(
'postgres',
"SELECT * FROM t_009_tbl2",
stdout => \$psql_out);
is( $psql_out,
qq{27|issued to paris},
- "Check expected t_009_tbl2 data on master");
+ "Check expected t_009_tbl2 data on primary");
$cur_standby->psql(
'postgres',
diff --git a/src/test/recovery/t/010_logical_decoding_timelines.pl b/src/test/recovery/t/010_logical_decoding_timelines.pl
index 11f5595e2bf..09aaefa9f03 100644
--- a/src/test/recovery/t/010_logical_decoding_timelines.pl
+++ b/src/test/recovery/t/010_logical_decoding_timelines.pl
@@ -30,10 +30,10 @@ use Scalar::Util qw(blessed);
my ($stdout, $stderr, $ret);
-# Initialize master node
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1, has_archiving => 1);
-$node_master->append_conf(
+# Initialize primary node
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1, has_archiving => 1);
+$node_primary->append_conf(
'postgresql.conf', q[
wal_level = 'logical'
max_replication_slots = 3
@@ -42,38 +42,38 @@ log_min_messages = 'debug2'
hot_standby_feedback = on
wal_receiver_status_interval = 1
]);
-$node_master->dump_info;
-$node_master->start;
+$node_primary->dump_info;
+$node_primary->start;
note "testing logical timeline following with a filesystem-level copy";
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"SELECT pg_create_logical_replication_slot('before_basebackup', 'test_decoding');"
);
-$node_master->safe_psql('postgres', "CREATE TABLE decoding(blah text);");
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres', "CREATE TABLE decoding(blah text);");
+$node_primary->safe_psql('postgres',
"INSERT INTO decoding(blah) VALUES ('beforebb');");
# We also want to verify that DROP DATABASE on a standby with a logical
# slot works. This isn't strictly related to timeline following, but
# the only way to get a logical slot on a standby right now is to use
# the same physical copy trick, so:
-$node_master->safe_psql('postgres', 'CREATE DATABASE dropme;');
-$node_master->safe_psql('dropme',
+$node_primary->safe_psql('postgres', 'CREATE DATABASE dropme;');
+$node_primary->safe_psql('dropme',
"SELECT pg_create_logical_replication_slot('dropme_slot', 'test_decoding');"
);
-$node_master->safe_psql('postgres', 'CHECKPOINT;');
+$node_primary->safe_psql('postgres', 'CHECKPOINT;');
my $backup_name = 'b1';
-$node_master->backup_fs_hot($backup_name);
+$node_primary->backup_fs_hot($backup_name);
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
q[SELECT pg_create_physical_replication_slot('phys_slot');]);
my $node_replica = get_new_node('replica');
$node_replica->init_from_backup(
- $node_master, $backup_name,
+ $node_primary, $backup_name,
has_streaming => 1,
has_restoring => 1);
$node_replica->append_conf('postgresql.conf',
@@ -81,26 +81,26 @@ $node_replica->append_conf('postgresql.conf',
$node_replica->start;
-# If we drop 'dropme' on the master, the standby should drop the
+# If we drop 'dropme' on the primary, the standby should drop the
# db and associated slot.
-is($node_master->psql('postgres', 'DROP DATABASE dropme'),
- 0, 'dropped DB with logical slot OK on master');
-$node_master->wait_for_catchup($node_replica, 'replay',
- $node_master->lsn('insert'));
+is($node_primary->psql('postgres', 'DROP DATABASE dropme'),
+ 0, 'dropped DB with logical slot OK on primary');
+$node_primary->wait_for_catchup($node_replica, 'replay',
+ $node_primary->lsn('insert'));
is( $node_replica->safe_psql(
'postgres', q[SELECT 1 FROM pg_database WHERE datname = 'dropme']),
'',
'dropped DB dropme on standby');
-is($node_master->slot('dropme_slot')->{'slot_name'},
+is($node_primary->slot('dropme_slot')->{'slot_name'},
undef, 'logical slot was actually dropped on standby');
# Back to testing failover...
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"SELECT pg_create_logical_replication_slot('after_basebackup', 'test_decoding');"
);
-$node_master->safe_psql('postgres',
+$node_primary->safe_psql('postgres',
"INSERT INTO decoding(blah) VALUES ('afterbb');");
-$node_master->safe_psql('postgres', 'CHECKPOINT;');
+$node_primary->safe_psql('postgres', 'CHECKPOINT;');
# Verify that only the before base_backup slot is on the replica
$stdout = $node_replica->safe_psql('postgres',
@@ -109,20 +109,20 @@ is($stdout, 'before_basebackup',
'Expected to find only slot before_basebackup on replica');
# Examine the physical slot the replica uses to stream changes
-# from the master to make sure its hot_standby_feedback
+# from the primary to make sure its hot_standby_feedback
# has locked in a catalog_xmin on the physical slot, and that
# any xmin is < the catalog_xmin
-$node_master->poll_query_until(
+$node_primary->poll_query_until(
'postgres', q[
SELECT catalog_xmin IS NOT NULL
FROM pg_replication_slots
WHERE slot_name = 'phys_slot'
]) or die "slot's catalog_xmin never became set";
-my $phys_slot = $node_master->slot('phys_slot');
-isnt($phys_slot->{'xmin'}, '', 'xmin assigned on physical slot of master');
+my $phys_slot = $node_primary->slot('phys_slot');
+isnt($phys_slot->{'xmin'}, '', 'xmin assigned on physical slot of primary');
isnt($phys_slot->{'catalog_xmin'},
- '', 'catalog_xmin assigned on physical slot of master');
+ '', 'catalog_xmin assigned on physical slot of primary');
# Ignore wrap-around here, we're on a new cluster:
cmp_ok(
@@ -130,11 +130,11 @@ cmp_ok(
$phys_slot->{'catalog_xmin'},
'xmin on physical slot must not be lower than catalog_xmin');
-$node_master->safe_psql('postgres', 'CHECKPOINT');
-$node_master->wait_for_catchup($node_replica, 'write');
+$node_primary->safe_psql('postgres', 'CHECKPOINT');
+$node_primary->wait_for_catchup($node_replica, 'write');
# Boom, crash
-$node_master->stop('immediate');
+$node_primary->stop('immediate');
$node_replica->promote;
diff --git a/src/test/recovery/t/011_crash_recovery.pl b/src/test/recovery/t/011_crash_recovery.pl
index ca6e92b50df..5fe917978c6 100644
--- a/src/test/recovery/t/011_crash_recovery.pl
+++ b/src/test/recovery/t/011_crash_recovery.pl
@@ -18,7 +18,7 @@ else
plan tests => 3;
}
-my $node = get_new_node('master');
+my $node = get_new_node('primary');
$node->init(allows_streaming => 1);
$node->start;
diff --git a/src/test/recovery/t/012_subtransactions.pl b/src/test/recovery/t/012_subtransactions.pl
index 292cd40fe2d..6b9e29ae3c7 100644
--- a/src/test/recovery/t/012_subtransactions.pl
+++ b/src/test/recovery/t/012_subtransactions.pl
@@ -6,30 +6,30 @@ use PostgresNode;
use TestLib;
use Test::More tests => 12;
-# Setup master node
-my $node_master = get_new_node("master");
-$node_master->init(allows_streaming => 1);
-$node_master->append_conf(
+# Setup primary node
+my $node_primary = get_new_node("primary");
+$node_primary->init(allows_streaming => 1);
+$node_primary->append_conf(
'postgresql.conf', qq(
max_prepared_transactions = 10
log_checkpoints = true
));
-$node_master->start;
-$node_master->backup('master_backup');
-$node_master->psql('postgres', "CREATE TABLE t_012_tbl (id int)");
+$node_primary->start;
+$node_primary->backup('primary_backup');
+$node_primary->psql('postgres', "CREATE TABLE t_012_tbl (id int)");
# Setup standby node
my $node_standby = get_new_node('standby');
-$node_standby->init_from_backup($node_master, 'master_backup',
+$node_standby->init_from_backup($node_primary, 'primary_backup',
has_streaming => 1);
$node_standby->start;
# Switch to synchronous replication
-$node_master->append_conf(
+$node_primary->append_conf(
'postgresql.conf', qq(
synchronous_standby_names = '*'
));
-$node_master->psql('postgres', "SELECT pg_reload_conf()");
+$node_primary->psql('postgres', "SELECT pg_reload_conf()");
my $psql_out = '';
my $psql_rc = '';
@@ -39,7 +39,7 @@ my $psql_rc = '';
# so that it won't conflict with savepoint xids.
###############################################################################
-$node_master->psql(
+$node_primary->psql(
'postgres', "
BEGIN;
DELETE FROM t_012_tbl;
@@ -57,9 +57,9 @@ $node_master->psql(
PREPARE TRANSACTION 'xact_012_1';
CHECKPOINT;");
-$node_master->stop;
-$node_master->start;
-$node_master->psql(
+$node_primary->stop;
+$node_primary->start;
+$node_primary->psql(
'postgres', "
-- here we can get xid of previous savepoint if nextXid
-- wasn't properly advanced
@@ -68,7 +68,7 @@ $node_master->psql(
ROLLBACK;
COMMIT PREPARED 'xact_012_1';");
-$node_master->psql(
+$node_primary->psql(
'postgres',
"SELECT count(*) FROM t_012_tbl",
stdout => \$psql_out);
@@ -79,10 +79,10 @@ is($psql_out, '6', "Check nextXid handling for prepared subtransactions");
# PGPROC_MAX_CACHED_SUBXIDS subtransactions and also show data properly
# on promotion
###############################################################################
-$node_master->psql('postgres', "DELETE FROM t_012_tbl");
+$node_primary->psql('postgres', "DELETE FROM t_012_tbl");
# Function borrowed from src/test/regress/sql/hs_primary_extremes.sql
-$node_master->psql(
+$node_primary->psql(
'postgres', "
CREATE OR REPLACE FUNCTION hs_subxids (n integer)
RETURNS void
@@ -95,19 +95,19 @@ $node_master->psql(
RETURN;
EXCEPTION WHEN raise_exception THEN NULL; END;
\$\$;");
-$node_master->psql(
+$node_primary->psql(
'postgres', "
BEGIN;
SELECT hs_subxids(127);
COMMIT;");
-$node_master->wait_for_catchup($node_standby, 'replay',
- $node_master->lsn('insert'));
+$node_primary->wait_for_catchup($node_standby, 'replay',
+ $node_primary->lsn('insert'));
$node_standby->psql(
'postgres',
"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
stdout => \$psql_out);
is($psql_out, '8128', "Visible");
-$node_master->stop;
+$node_primary->stop;
$node_standby->promote;
$node_standby->psql(
@@ -117,8 +117,8 @@ $node_standby->psql(
is($psql_out, '8128', "Visible");
# restore state
-($node_master, $node_standby) = ($node_standby, $node_master);
-$node_standby->enable_streaming($node_master);
+($node_primary, $node_standby) = ($node_standby, $node_primary);
+$node_standby->enable_streaming($node_primary);
$node_standby->start;
$node_standby->psql(
'postgres',
@@ -126,10 +126,10 @@ $node_standby->psql(
stdout => \$psql_out);
is($psql_out, '8128', "Visible");
-$node_master->psql('postgres', "DELETE FROM t_012_tbl");
+$node_primary->psql('postgres', "DELETE FROM t_012_tbl");
# Function borrowed from src/test/regress/sql/hs_primary_extremes.sql
-$node_master->psql(
+$node_primary->psql(
'postgres', "
CREATE OR REPLACE FUNCTION hs_subxids (n integer)
RETURNS void
@@ -142,19 +142,19 @@ $node_master->psql(
RETURN;
EXCEPTION WHEN raise_exception THEN NULL; END;
\$\$;");
-$node_master->psql(
+$node_primary->psql(
'postgres', "
BEGIN;
SELECT hs_subxids(127);
PREPARE TRANSACTION 'xact_012_1';");
-$node_master->wait_for_catchup($node_standby, 'replay',
- $node_master->lsn('insert'));
+$node_primary->wait_for_catchup($node_standby, 'replay',
+ $node_primary->lsn('insert'));
$node_standby->psql(
'postgres',
"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
stdout => \$psql_out);
is($psql_out, '-1', "Not visible");
-$node_master->stop;
+$node_primary->stop;
$node_standby->promote;
$node_standby->psql(
@@ -164,34 +164,34 @@ $node_standby->psql(
is($psql_out, '-1', "Not visible");
# restore state
-($node_master, $node_standby) = ($node_standby, $node_master);
-$node_standby->enable_streaming($node_master);
+($node_primary, $node_standby) = ($node_standby, $node_primary);
+$node_standby->enable_streaming($node_primary);
$node_standby->start;
-$psql_rc = $node_master->psql('postgres', "COMMIT PREPARED 'xact_012_1'");
+$psql_rc = $node_primary->psql('postgres', "COMMIT PREPARED 'xact_012_1'");
is($psql_rc, '0',
"Restore of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby"
);
-$node_master->psql(
+$node_primary->psql(
'postgres',
"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
stdout => \$psql_out);
is($psql_out, '8128', "Visible");
-$node_master->psql('postgres', "DELETE FROM t_012_tbl");
-$node_master->psql(
+$node_primary->psql('postgres', "DELETE FROM t_012_tbl");
+$node_primary->psql(
'postgres', "
BEGIN;
SELECT hs_subxids(201);
PREPARE TRANSACTION 'xact_012_1';");
-$node_master->wait_for_catchup($node_standby, 'replay',
- $node_master->lsn('insert'));
+$node_primary->wait_for_catchup($node_standby, 'replay',
+ $node_primary->lsn('insert'));
$node_standby->psql(
'postgres',
"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
stdout => \$psql_out);
is($psql_out, '-1', "Not visible");
-$node_master->stop;
+$node_primary->stop;
$node_standby->promote;
$node_standby->psql(
@@ -201,15 +201,15 @@ $node_standby->psql(
is($psql_out, '-1', "Not visible");
# restore state
-($node_master, $node_standby) = ($node_standby, $node_master);
-$node_standby->enable_streaming($node_master);
+($node_primary, $node_standby) = ($node_standby, $node_primary);
+$node_standby->enable_streaming($node_primary);
$node_standby->start;
-$psql_rc = $node_master->psql('postgres', "ROLLBACK PREPARED 'xact_012_1'");
+$psql_rc = $node_primary->psql('postgres', "ROLLBACK PREPARED 'xact_012_1'");
is($psql_rc, '0',
"Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby"
);
-$node_master->psql(
+$node_primary->psql(
'postgres',
"SELECT coalesce(sum(id),-1) FROM t_012_tbl",
stdout => \$psql_out);
diff --git a/src/test/recovery/t/013_crash_restart.pl b/src/test/recovery/t/013_crash_restart.pl
index 2c477978e7d..95d7bb62425 100644
--- a/src/test/recovery/t/013_crash_restart.pl
+++ b/src/test/recovery/t/013_crash_restart.pl
@@ -25,7 +25,7 @@ plan tests => 18;
# is really wrong.
my $psql_timeout = IPC::Run::timer(60);
-my $node = get_new_node('master');
+my $node = get_new_node('primary');
$node->init(allows_streaming => 1);
$node->start();
diff --git a/src/test/recovery/t/019_replslot_limit.pl b/src/test/recovery/t/019_replslot_limit.pl
index cba7df920c0..5f260b04d67 100644
--- a/src/test/recovery/t/019_replslot_limit.pl
+++ b/src/test/recovery/t/019_replslot_limit.pl
@@ -13,21 +13,21 @@ use Time::HiRes qw(usleep);
$ENV{PGDATABASE} = 'postgres';
-# Initialize master node, setting wal-segsize to 1MB
-my $node_master = get_new_node('master');
-$node_master->init(allows_streaming => 1, extra => ['--wal-segsize=1']);
-$node_master->append_conf(
+# Initialize primary node, setting wal-segsize to 1MB
+my $node_primary = get_new_node('primary');
+$node_primary->init(allows_streaming => 1, extra => ['--wal-segsize=1']);
+$node_primary->append_conf(
'postgresql.conf', qq(
min_wal_size = 2MB
max_wal_size = 4MB
log_checkpoints = yes
));
-$node_master->start;
-$node_master->safe_psql('postgres',
+$node_primary->start;
+$node_primary->safe_psql('postgres',
"SELECT pg_create_physical_replication_slot('rep1')");
# The slot state and remain should be null before the first connection
-my $result = $node_master->safe_psql('postgres',
+my $result = $node_primary->safe_psql('postgres',
"SELECT restart_lsn IS NULL, wal_status is NULL, min_safe_lsn is NULL FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "t|t|t", 'check the state of non-reserved slot is "unknown"');
@@ -35,133 +35,133 @@ is($result, "t|t|t", 'check the state of non-reserved slot is "unknown"');
# Take backup
my $backup_name = 'my_backup';
-$node_master->backup($backup_name);
+$node_primary->backup($backup_name);
# Create a standby linking to it using the replication slot
my $node_standby = get_new_node('standby_1');
-$node_standby->init_from_backup($node_master, $backup_name,
+$node_standby->init_from_backup($node_primary, $backup_name,
has_streaming => 1);
$node_standby->append_conf('postgresql.conf', "primary_slot_name = 'rep1'");
$node_standby->start;
# Wait until standby has replayed enough data
-my $start_lsn = $node_master->lsn('write');
-$node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);
+my $start_lsn = $node_primary->lsn('write');
+$node_primary->wait_for_catchup($node_standby, 'replay', $start_lsn);
# Stop standby
$node_standby->stop;
# Preparation done, the slot is the state "normal" now
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status, min_safe_lsn is NULL FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "normal|t", 'check the catching-up state');
-# Advance WAL by five segments (= 5MB) on master
-advance_wal($node_master, 1);
-$node_master->safe_psql('postgres', "CHECKPOINT;");
+# Advance WAL by five segments (= 5MB) on primary
+advance_wal($node_primary, 1);
+$node_primary->safe_psql('postgres', "CHECKPOINT;");
# The slot is always "safe" when fitting max_wal_size
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status, min_safe_lsn is NULL FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "normal|t", 'check that it is safe if WAL fits in max_wal_size');
-advance_wal($node_master, 4);
-$node_master->safe_psql('postgres', "CHECKPOINT;");
+advance_wal($node_primary, 4);
+$node_primary->safe_psql('postgres', "CHECKPOINT;");
# The slot is always "safe" when max_slot_wal_keep_size is not set
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status, min_safe_lsn is NULL FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "normal|t", 'check that slot is working');
-# The standby can reconnect to master
+# The standby can reconnect to primary
$node_standby->start;
-$start_lsn = $node_master->lsn('write');
-$node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);
+$start_lsn = $node_primary->lsn('write');
+$node_primary->wait_for_catchup($node_standby, 'replay', $start_lsn);
$node_standby->stop;
-# Set max_slot_wal_keep_size on master
+# Set max_slot_wal_keep_size on primary
my $max_slot_wal_keep_size_mb = 6;
-$node_master->append_conf(
+$node_primary->append_conf(
'postgresql.conf', qq(
max_slot_wal_keep_size = ${max_slot_wal_keep_size_mb}MB
));
-$node_master->reload;
+$node_primary->reload;
# The slot is in safe state. The distance from the min_safe_lsn should
# be as almost (max_slot_wal_keep_size - 1) times large as the segment
# size
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep1'");
is($result, "normal", 'check that max_slot_wal_keep_size is working');
# Advance WAL again then checkpoint, reducing remain by 2 MB.
-advance_wal($node_master, 2);
-$node_master->safe_psql('postgres', "CHECKPOINT;");
+advance_wal($node_primary, 2);
+$node_primary->safe_psql('postgres', "CHECKPOINT;");
# The slot is still working
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep1'");
is($result, "normal",
'check that min_safe_lsn gets close to the current LSN');
-# The standby can reconnect to master
+# The standby can reconnect to primary
$node_standby->start;
-$start_lsn = $node_master->lsn('write');
-$node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);
+$start_lsn = $node_primary->lsn('write');
+$node_primary->wait_for_catchup($node_standby, 'replay', $start_lsn);
$node_standby->stop;
# wal_keep_segments overrides max_slot_wal_keep_size
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"ALTER SYSTEM SET wal_keep_segments to 8; SELECT pg_reload_conf();");
# Advance WAL again then checkpoint, reducing remain by 6 MB.
-advance_wal($node_master, 6);
-$result = $node_master->safe_psql('postgres',
+advance_wal($node_primary, 6);
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status as remain FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "normal",
'check that wal_keep_segments overrides max_slot_wal_keep_size');
# restore wal_keep_segments
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"ALTER SYSTEM SET wal_keep_segments to 0; SELECT pg_reload_conf();");
-# The standby can reconnect to master
+# The standby can reconnect to primary
$node_standby->start;
-$start_lsn = $node_master->lsn('write');
-$node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);
+$start_lsn = $node_primary->lsn('write');
+$node_primary->wait_for_catchup($node_standby, 'replay', $start_lsn);
$node_standby->stop;
# Advance WAL again without checkpoint, reducing remain by 6 MB.
-advance_wal($node_master, 6);
+advance_wal($node_primary, 6);
# Slot gets into 'reserved' state
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep1'");
is($result, "reserved", 'check that the slot state changes to "reserved"');
# do checkpoint so that the next checkpoint runs too early
-$node_master->safe_psql('postgres', "CHECKPOINT;");
+$node_primary->safe_psql('postgres', "CHECKPOINT;");
# Advance WAL again without checkpoint; remain goes to 0.
-advance_wal($node_master, 1);
+advance_wal($node_primary, 1);
# Slot gets into 'lost' state
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT wal_status, min_safe_lsn is NULL FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "lost|t", 'check that the slot state changes to "lost"');
-# The standby still can connect to master before a checkpoint
+# The standby still can connect to primary before a checkpoint
$node_standby->start;
-$start_lsn = $node_master->lsn('write');
-$node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);
+$start_lsn = $node_primary->lsn('write');
+$node_primary->wait_for_catchup($node_standby, 'replay', $start_lsn);
$node_standby->stop;
@@ -171,24 +171,24 @@ ok( !find_in_log(
'check that required WAL segments are still available');
# Advance WAL again, the slot loses the oldest segment.
-my $logstart = get_log_size($node_master);
-advance_wal($node_master, 7);
-$node_master->safe_psql('postgres', "CHECKPOINT;");
+my $logstart = get_log_size($node_primary);
+advance_wal($node_primary, 7);
+$node_primary->safe_psql('postgres', "CHECKPOINT;");
# WARNING should be issued
ok( find_in_log(
- $node_master,
+ $node_primary,
"invalidating slot \"rep1\" because its restart_lsn [0-9A-F/]+ exceeds max_slot_wal_keep_size",
$logstart),
'check that the warning is logged');
# This slot should be broken
-$result = $node_master->safe_psql('postgres',
+$result = $node_primary->safe_psql('postgres',
"SELECT slot_name, active, restart_lsn IS NULL, wal_status, min_safe_lsn FROM pg_replication_slots WHERE slot_name = 'rep1'"
);
is($result, "rep1|f|t||", 'check that the slot became inactive');
-# The standby no longer can connect to the master
+# The standby no longer can connect to the primary
$logstart = get_log_size($node_standby);
$node_standby->start;
@@ -207,39 +207,39 @@ for (my $i = 0; $i < 10000; $i++)
}
ok($failed, 'check that replication has been broken');
-$node_master->stop('immediate');
+$node_primary->stop('immediate');
$node_standby->stop('immediate');
-my $node_master2 = get_new_node('master2');
-$node_master2->init(allows_streaming => 1);
-$node_master2->append_conf(
+my $node_primary2 = get_new_node('primary2');
+$node_primary2->init(allows_streaming => 1);
+$node_primary2->append_conf(
'postgresql.conf', qq(
min_wal_size = 32MB
max_wal_size = 32MB
log_checkpoints = yes
));
-$node_master2->start;
-$node_master2->safe_psql('postgres',
+$node_primary2->start;
+$node_primary2->safe_psql('postgres',
"SELECT pg_create_physical_replication_slot('rep1')");
$backup_name = 'my_backup2';
-$node_master2->backup($backup_name);
+$node_primary2->backup($backup_name);
-$node_master2->stop;
-$node_master2->append_conf(
+$node_primary2->stop;
+$node_primary2->append_conf(
'postgresql.conf', qq(
max_slot_wal_keep_size = 0
));
-$node_master2->start;
+$node_primary2->start;
$node_standby = get_new_node('standby_2');
-$node_standby->init_from_backup($node_master2, $backup_name,
+$node_standby->init_from_backup($node_primary2, $backup_name,
has_streaming => 1);
$node_standby->append_conf('postgresql.conf', "primary_slot_name = 'rep1'");
$node_standby->start;
my @result =
split(
'\n',
- $node_master2->safe_psql(
+ $node_primary2->safe_psql(
'postgres',
"CREATE TABLE tt();
DROP TABLE tt;
@@ -255,7 +255,7 @@ sub advance_wal
{
my ($node, $n) = @_;
- # Advance by $n segments (= (16 * $n) MB) on master
+ # Advance by $n segments (= (16 * $n) MB) on primary
for (my $i = 0; $i < $n; $i++)
{
$node->safe_psql('postgres',
diff --git a/src/test/recovery/t/020_archive_status.pl b/src/test/recovery/t/020_archive_status.pl
index c18b737785d..c726453417b 100644
--- a/src/test/recovery/t/020_archive_status.pl
+++ b/src/test/recovery/t/020_archive_status.pl
@@ -8,7 +8,7 @@ use TestLib;
use Test::More tests => 16;
use Config;
-my $primary = get_new_node('master');
+my $primary = get_new_node('primary');
$primary->init(
has_archiving => 1,
allows_streaming => 1);
diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl
index a454bb0274a..91817c28e75 100644
--- a/src/test/ssl/t/001_ssltests.pl
+++ b/src/test/ssl/t/001_ssltests.pl
@@ -59,7 +59,7 @@ chmod 0644, "ssl/client_wrongperms_tmp.key";
#### Set up the server.
note "setting up data directory";
-my $node = get_new_node('master');
+my $node = get_new_node('primary');
$node->init;
# PGHOST is enforced here to set up the node, subsequent connections
diff --git a/src/test/ssl/t/002_scram.pl b/src/test/ssl/t/002_scram.pl
index ee6e26d7323..69a14b142e5 100644
--- a/src/test/ssl/t/002_scram.pl
+++ b/src/test/ssl/t/002_scram.pl
@@ -35,7 +35,7 @@ my $common_connstr;
# Set up the server.
note "setting up data directory";
-my $node = get_new_node('master');
+my $node = get_new_node('primary');
$node->init;
# PGHOST is enforced here to set up the node, subsequent connections
--
2.25.0.114.g5b0ca878e0
v1-0002-code-s-master-primary.patchtext/x-diff; charset=us-asciiDownload
From 90ad12a0c17599820e23a0bf555890ae41d80a40 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sun, 14 Jun 2020 14:05:18 -0700
Subject: [PATCH v1 2/8] code: s/master/primary/
Also changed "in the primary" to "on the primary", and added a few
"the" before "primary".
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/include/access/xlog.h | 4 +-
src/include/tcop/utility.h | 2 +-
src/include/utils/guc_tables.h | 2 +-
src/backend/access/common/bufmask.c | 4 +-
src/backend/access/gist/gistxlog.c | 2 +-
src/backend/access/heap/heapam.c | 12 ++--
src/backend/access/heap/pruneheap.c | 2 +-
src/backend/access/nbtree/README | 2 +-
src/backend/access/nbtree/nbtxlog.c | 2 +-
src/backend/access/transam/commit_ts.c | 12 ++--
src/backend/access/transam/xlog.c | 56 +++++++++----------
src/backend/access/transam/xlogutils.c | 6 +-
src/backend/catalog/namespace.c | 2 +-
src/backend/commands/tablecmds.c | 2 +-
src/backend/postmaster/postmaster.c | 4 +-
src/backend/replication/README | 6 +-
src/backend/replication/basebackup.c | 2 +-
src/backend/replication/logical/worker.c | 4 +-
src/backend/replication/walreceiver.c | 32 +++++------
src/backend/replication/walsender.c | 8 +--
src/backend/storage/ipc/procarray.c | 20 +++----
src/backend/storage/ipc/standby.c | 2 +-
src/backend/storage/lmgr/README | 2 +-
src/backend/storage/page/README | 2 +-
src/backend/utils/misc/guc.c | 8 +--
src/backend/utils/misc/postgresql.conf.sample | 8 +--
src/bin/pg_basebackup/pg_recvlogical.c | 2 +-
src/bin/pg_basebackup/receivelog.c | 4 +-
src/bin/pg_rewind/copy_fetch.c | 2 +-
src/bin/pg_rewind/filemap.c | 2 +-
src/bin/pg_rewind/parsexlog.c | 2 +-
31 files changed, 110 insertions(+), 110 deletions(-)
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index e917dfe92d8..afbfca6be86 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -50,7 +50,7 @@ extern bool InRecovery;
*
* In INITIALIZED state, we've run InitRecoveryTransactionEnvironment, but
* we haven't yet processed a RUNNING_XACTS or shutdown-checkpoint WAL record
- * to initialize our master-transaction tracking system.
+ * to initialize our primary-transaction tracking system.
*
* When the transaction tracking is initialized, we enter the SNAPSHOT_PENDING
* state. The tracked information might still be incomplete, so we can't allow
@@ -58,7 +58,7 @@ extern bool InRecovery;
* appropriate.
*
* In SNAPSHOT_READY mode, we have full knowledge of transactions that are
- * (or were) running in the master at the current WAL location. Snapshots
+ * (or were) running on the primary at the current WAL location. Snapshots
* can be taken, and read-only queries can be run.
*/
typedef enum
diff --git a/src/include/tcop/utility.h b/src/include/tcop/utility.h
index 4aec19a0087..9594856c88a 100644
--- a/src/include/tcop/utility.h
+++ b/src/include/tcop/utility.h
@@ -51,7 +51,7 @@ typedef struct AlterTableUtilityContext
*
* COMMAND_OK_IN_RECOVERY means that the command is permissible even when in
* recovery. It can't write WAL, nor can it do things that would imperil
- * replay of future WAL received from the master.
+ * replay of future WAL received from the primary.
*/
#define COMMAND_OK_IN_READ_ONLY_TXN 0x0001
#define COMMAND_OK_IN_PARALLEL_MODE 0x0002
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 454c2df4878..04431d0eb25 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -73,7 +73,7 @@ enum config_group
WAL_RECOVERY_TARGET,
REPLICATION,
REPLICATION_SENDING,
- REPLICATION_MASTER,
+ REPLICATION_PRIMARY,
REPLICATION_STANDBY,
REPLICATION_SUBSCRIBERS,
QUERY_TUNING,
diff --git a/src/backend/access/common/bufmask.c b/src/backend/access/common/bufmask.c
index 8dcc747b94a..4bdb1848ad2 100644
--- a/src/backend/access/common/bufmask.c
+++ b/src/backend/access/common/bufmask.c
@@ -88,8 +88,8 @@ mask_unused_space(Page page)
/*
* mask_lp_flags
*
- * In some index AMs, line pointer flags can be modified in master without
- * emitting any WAL record.
+ * In some index AMs, line pointer flags can be modified on the primary
+ * without emitting any WAL record.
*/
void
mask_lp_flags(Page page)
diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c
index b60dba052fa..3f0effd5e42 100644
--- a/src/backend/access/gist/gistxlog.c
+++ b/src/backend/access/gist/gistxlog.c
@@ -391,7 +391,7 @@ gistRedoPageReuse(XLogReaderState *record)
* RecentGlobalXmin test in gistPageRecyclable() conceptually mirrors the
* pgxact->xmin > limitXmin test in GetConflictingVirtualXIDs().
* Consequently, one XID value achieves the same exclusion effect on
- * master and standby.
+ * primary and standby.
*/
if (InHotStandby)
{
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 537913d1bb3..7bd45703aa6 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -410,10 +410,10 @@ heapgetpage(TableScanDesc sscan, BlockNumber page)
* visible to everyone, we can skip the per-tuple visibility tests.
*
* Note: In hot standby, a tuple that's already visible to all
- * transactions in the master might still be invisible to a read-only
+ * transactions on the primary might still be invisible to a read-only
* transaction in the standby. We partly handle this problem by tracking
* the minimum xmin of visible tuples as the cut-off XID while marking a
- * page all-visible on master and WAL log that along with the visibility
+ * page all-visible on the primary and WAL log that along with the visibility
* map SET operation. In hot standby, we wait for (or abort) all
* transactions that can potentially may not see one or more tuples on the
* page. That's how index-only scans work fine in hot standby. A crucial
@@ -6889,7 +6889,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
* updated/deleted by the inserting transaction.
*
* Look for a committed hint bit, or if no xmin bit is set, check clog.
- * This needs to work on both master and standby, where it is used to
+ * This needs to work on both primary and standby, where it is used to
* assess btree delete records.
*/
if (HeapTupleHeaderXminCommitted(tuple) ||
@@ -6951,9 +6951,9 @@ xid_horizon_prefetch_buffer(Relation rel,
* tuples being deleted.
*
* We used to do this during recovery rather than on the primary, but that
- * approach now appears inferior. It meant that the master could generate
+ * approach now appears inferior. It meant that the primary could generate
* a lot of work for the standby without any back-pressure to slow down the
- * master, and it required the standby to have reached consistency, whereas
+ * primary, and it required the standby to have reached consistency, whereas
* we want to have correct information available even before that point.
*
* It's possible for this to generate a fair amount of I/O, since we may be
@@ -8943,7 +8943,7 @@ heap_mask(char *pagedata, BlockNumber blkno)
*
* During redo, heap_xlog_insert() sets t_ctid to current block
* number and self offset number. It doesn't care about any
- * speculative insertions in master. Hence, we set t_ctid to
+ * speculative insertions on the primary. Hence, we set t_ctid to
* current block number and self offset number to ignore any
* inconsistency.
*/
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 1794cfd8d9a..256df4de105 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -78,7 +78,7 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
/*
* We can't write WAL in recovery mode, so there's no point trying to
- * clean the page. The master will likely issue a cleaning WAL record soon
+ * clean the page. The primary will likely issue a cleaning WAL record soon
* anyway, so this is no particular loss.
*/
if (RecoveryInProgress())
diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README
index 216d419841c..32ad9e339a2 100644
--- a/src/backend/access/nbtree/README
+++ b/src/backend/access/nbtree/README
@@ -574,7 +574,7 @@ writers that insert on to the page being deleted.)
During recovery all index scans start with ignore_killed_tuples = false
and we never set kill_prior_tuple. We do this because the oldest xmin
-on the standby server can be older than the oldest xmin on the master
+on the standby server can be older than the oldest xmin on the primary
server, which means tuples can be marked LP_DEAD even when they are
still visible on the standby. We don't WAL log tuple LP_DEAD bits, but
they can still appear in the standby because of full page writes. So
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index 87a8612c28c..3c89a868836 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -931,7 +931,7 @@ btree_xlog_reuse_page(XLogReaderState *record)
* RecentGlobalXmin test in _bt_page_recyclable() conceptually mirrors the
* pgxact->xmin > limitXmin test in GetConflictingVirtualXIDs().
* Consequently, one XID value achieves the same exclusion effect on
- * master and standby.
+ * primary and standby.
*/
if (InHotStandby)
{
diff --git a/src/backend/access/transam/commit_ts.c b/src/backend/access/transam/commit_ts.c
index 9cdb1364359..182e5391f7b 100644
--- a/src/backend/access/transam/commit_ts.c
+++ b/src/backend/access/transam/commit_ts.c
@@ -392,7 +392,7 @@ error_commit_ts_disabled(void)
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("could not get commit timestamp data"),
RecoveryInProgress() ?
- errhint("Make sure the configuration parameter \"%s\" is set on the master server.",
+ errhint("Make sure the configuration parameter \"%s\" is set on the primary server.",
"track_commit_timestamp") :
errhint("Make sure the configuration parameter \"%s\" is set.",
"track_commit_timestamp")));
@@ -592,12 +592,12 @@ CommitTsParameterChange(bool newvalue, bool oldvalue)
{
/*
* If the commit_ts module is disabled in this server and we get word from
- * the master server that it is enabled there, activate it so that we can
+ * the primary server that it is enabled there, activate it so that we can
* replay future WAL records involving it; also mark it as active on
* pg_control. If the old value was already set, we already did this, so
* don't do anything.
*
- * If the module is disabled in the master, disable it here too, unless
+ * If the module is disabled in the primary, disable it here too, unless
* the module is enabled locally.
*
* Note this only runs in the recovery process, so an unlocked read is
@@ -616,12 +616,12 @@ CommitTsParameterChange(bool newvalue, bool oldvalue)
* Activate this module whenever necessary.
* This must happen during postmaster or standalone-backend startup,
* or during WAL replay anytime the track_commit_timestamp setting is
- * changed in the master.
+ * changed in the primary.
*
* The reason why this SLRU needs separate activation/deactivation functions is
* that it can be enabled/disabled during start and the activation/deactivation
- * on master is propagated to standby via replay. Other SLRUs don't have this
- * property and they can be just initialized during normal startup.
+ * on the primary is propagated to the standby via replay. Other SLRUs don't
+ * have this property and they can be just initialized during normal startup.
*
* This is in charge of creating the currently active segment, if it's not
* already there. The reason for this is that the server might have been
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 55cac186dc7..2e07ca00774 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -273,7 +273,7 @@ static bool restoredFromArchive = false;
/* Buffers dedicated to consistency checks of size BLCKSZ */
static char *replay_image_masked = NULL;
-static char *master_image_masked = NULL;
+static char *primary_image_masked = NULL;
/* options formerly taken from recovery.conf for archive recovery */
char *recoveryRestoreCommand = NULL;
@@ -782,7 +782,7 @@ typedef enum
XLOG_FROM_ANY = 0, /* request to read WAL from any source */
XLOG_FROM_ARCHIVE, /* restored using restore_command */
XLOG_FROM_PG_WAL, /* existing file in pg_wal */
- XLOG_FROM_STREAM /* streamed from master */
+ XLOG_FROM_STREAM /* streamed from primary */
} XLogSource;
/* human-readable names for XLogSources, for debugging output */
@@ -1476,21 +1476,21 @@ checkXLogConsistency(XLogReaderState *record)
* page here, a local buffer is fine to hold its contents and a mask
* can be directly applied on it.
*/
- if (!RestoreBlockImage(record, block_id, master_image_masked))
+ if (!RestoreBlockImage(record, block_id, primary_image_masked))
elog(ERROR, "failed to restore block image");
/*
- * If masking function is defined, mask both the master and replay
+ * If masking function is defined, mask both the primary and replay
* images
*/
if (RmgrTable[rmid].rm_mask != NULL)
{
RmgrTable[rmid].rm_mask(replay_image_masked, blkno);
- RmgrTable[rmid].rm_mask(master_image_masked, blkno);
+ RmgrTable[rmid].rm_mask(primary_image_masked, blkno);
}
- /* Time to compare the master and replay images. */
- if (memcmp(replay_image_masked, master_image_masked, BLCKSZ) != 0)
+ /* Time to compare the primary and replay images. */
+ if (memcmp(replay_image_masked, primary_image_masked, BLCKSZ) != 0)
{
elog(FATAL,
"inconsistent page found, rel %u/%u/%u, forknum %u, blkno %u",
@@ -2299,7 +2299,7 @@ CalculateCheckpointSegments(void)
* a) we keep WAL for only one checkpoint cycle (prior to PG11 we kept
* WAL for two checkpoint cycles to allow us to recover from the
* secondary checkpoint if the first checkpoint failed, though we
- * only did this on the master anyway, not on standby. Keeping just
+ * only did this on the primary anyway, not on standby. Keeping just
* one checkpoint simplifies processing and reduces disk space in
* many smaller databases.)
* b) during checkpoint, we consume checkpoint_completion_target *
@@ -3768,7 +3768,7 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)
* however, unless we actually find a valid segment. That way if there is
* neither a timeline history file nor a WAL segment in the archive, and
* streaming replication is set up, we'll read the timeline history file
- * streamed from the master when we start streaming, instead of recovering
+ * streamed from the primary when we start streaming, instead of recovering
* with a dummy history generated here.
*/
if (expectedTLEs)
@@ -6055,7 +6055,7 @@ SetRecoveryPause(bool recoveryPause)
/*
* When recovery_min_apply_delay is set, we wait long enough to make sure
- * certain record types are applied at least that interval behind the master.
+ * certain record types are applied at least that interval behind the primary.
*
* Returns true if we waited.
*
@@ -6237,7 +6237,7 @@ do { \
if ((currValue) < (minValue)) \
ereport(ERROR, \
(errcode(ERRCODE_INVALID_PARAMETER_VALUE), \
- errmsg("hot standby is not possible because %s = %d is a lower setting than on the master server (its value was %d)", \
+ errmsg("hot standby is not possible because %s = %d is a lower setting than on the primary server (its value was %d)", \
param_name, \
currValue, \
minValue))); \
@@ -6273,8 +6273,8 @@ CheckRequiredParameterValues(void)
{
if (ControlFile->wal_level < WAL_LEVEL_REPLICA)
ereport(ERROR,
- (errmsg("hot standby is not possible because wal_level was not set to \"replica\" or higher on the master server"),
- errhint("Either set wal_level to \"replica\" on the master, or turn off hot_standby here.")));
+ (errmsg("hot standby is not possible because wal_level was not set to \"replica\" or higher on the primary server"),
+ errhint("Either set wal_level to \"replica\" on the primary, or turn off hot_standby here.")));
/* We ignore autovacuum_max_workers when we make this test. */
RecoveryRequiresIntParameter("max_connections",
@@ -6500,7 +6500,7 @@ StartupXLOG(void)
* alignment, whereas palloc() will provide MAXALIGN'd storage.
*/
replay_image_masked = (char *) palloc(BLCKSZ);
- master_image_masked = (char *) palloc(BLCKSZ);
+ primary_image_masked = (char *) palloc(BLCKSZ);
if (read_backup_label(&checkPointLoc, &backupEndRequired,
&backupFromStandby))
@@ -6629,7 +6629,7 @@ StartupXLOG(void)
* know how far we need to replay the WAL before we reach consistency.
* This can happen for example if a base backup is taken from a
* running server using an atomic filesystem snapshot, without calling
- * pg_start/stop_backup. Or if you just kill a running master server
+ * pg_start/stop_backup. Or if you just kill a running primary server
* and put it into archive recovery by creating a recovery signal
* file.
*
@@ -6827,7 +6827,7 @@ StartupXLOG(void)
* ourselves - the history file of the recovery target timeline covers all
* the previous timelines in the history too - a cascading standby server
* might be interested in them. Or, if you archive the WAL from this
- * server to a different archive than the master, it'd be good for all the
+ * server to a different archive than the primary, it'd be good for all the
* history files to get archived there after failover, so that you can use
* one of the old timelines as a PITR target. Timeline history files are
* small, so it's better to copy them unnecessarily than not copy them and
@@ -7063,7 +7063,7 @@ StartupXLOG(void)
/*
* If we're beginning at a shutdown checkpoint, we know that
- * nothing was running on the master at this point. So fake-up an
+ * nothing was running on the primary at this point. So fake-up an
* empty running-xacts record and use that here and now. Recover
* additional standby state for prepared transactions.
*/
@@ -7231,7 +7231,7 @@ StartupXLOG(void)
}
/*
- * If we've been asked to lag the master, wait on latch until
+ * If we've been asked to lag the primary, wait on latch until
* enough time has passed.
*/
if (recoveryApplyDelay(xlogreader))
@@ -7346,7 +7346,7 @@ StartupXLOG(void)
/*
* If rm_redo called XLogRequestWalReceiverReply, then we wake
* up the receiver so that it notices the updated
- * lastReplayedEndRecPtr and sends a reply to the master.
+ * lastReplayedEndRecPtr and sends a reply to the primary.
*/
if (doRequestWalReceiverReply)
{
@@ -9943,7 +9943,7 @@ xlog_redo(XLogReaderState *record)
/*
* If we see a shutdown checkpoint, we know that nothing was running
- * on the master at this point. So fake-up an empty running-xacts
+ * on the primary at this point. So fake-up an empty running-xacts
* record and use that here and now. Recover additional standby state
* for prepared transactions.
*/
@@ -10658,7 +10658,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p,
"since last restartpoint"),
errhint("This means that the backup being taken on the standby "
"is corrupt and should not be used. "
- "Enable full_page_writes and run CHECKPOINT on the master, "
+ "Enable full_page_writes and run CHECKPOINT on the primary, "
"and then try an online backup again.")));
/*
@@ -10815,7 +10815,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p,
appendStringInfo(labelfile, "BACKUP METHOD: %s\n",
exclusive ? "pg_start_backup" : "streamed");
appendStringInfo(labelfile, "BACKUP FROM: %s\n",
- backup_started_in_recovery ? "standby" : "master");
+ backup_started_in_recovery ? "standby" : "primary");
appendStringInfo(labelfile, "START TIME: %s\n", strfbuf);
appendStringInfo(labelfile, "LABEL: %s\n", backupidstr);
appendStringInfo(labelfile, "START TIMELINE: %u\n", starttli);
@@ -11250,7 +11250,7 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)
"during online backup"),
errhint("This means that the backup being taken on the standby "
"is corrupt and should not be used. "
- "Enable full_page_writes and run CHECKPOINT on the master, "
+ "Enable full_page_writes and run CHECKPOINT on the primary, "
"and then try an online backup again.")));
@@ -11932,7 +11932,7 @@ retry:
Assert(readFile != -1);
/*
- * If the current segment is being streamed from master, calculate how
+ * If the current segment is being streamed from the primary, calculate how
* much of the current page we have received already. We know the
* requested record has been received, but this is for the benefit of
* future calls, to allow quick exit at the top of this function.
@@ -11993,8 +11993,8 @@ retry:
* example, imagine a scenario where a streaming replica is started up,
* and replay reaches a record that's split across two WAL segments. The
* first page is only available locally, in pg_wal, because it's already
- * been recycled in the master. The second page, however, is not present
- * in pg_wal, and we should stream it from the master. There is a recycled
+ * been recycled on the primary. The second page, however, is not present
+ * in pg_wal, and we should stream it from the primary. There is a recycled
* WAL segment present in pg_wal, with garbage contents, however. We would
* read the first page from the local WAL segment, but when reading the
* second page, we would read the bogus, recycled, WAL segment. If we
@@ -12154,7 +12154,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
* Failure while streaming. Most likely, we got here
* because streaming replication was terminated, or
* promotion was triggered. But we also get here if we
- * find an invalid record in the WAL streamed from master,
+ * find an invalid record in the WAL streamed from the primary,
* in which case something is seriously wrong. There's
* little chance that the problem will just go away, but
* PANIC is not good for availability either, especially
@@ -12515,7 +12515,7 @@ StartupRequestWalReceiverRestart(void)
* we're retrying the exact same record that we've tried previously, only
* complain the first time to keep the noise down. However, we only do when
* reading from pg_wal, because we don't expect any invalid records in archive
- * or in records streamed from master. Files in the archive should be complete,
+ * or in records streamed from the primary. Files in the archive should be complete,
* and we should never hit the end of WAL because we stop and wait for more WAL
* to arrive before replaying it.
*
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 322b0e8ff5b..b2ca0cd4cf3 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -654,8 +654,8 @@ XLogTruncateRelation(RelFileNode rnode, ForkNumber forkNum,
*
* We care about timelines in xlogreader when we might be reading xlog
* generated prior to a promotion, either if we're currently a standby in
- * recovery or if we're a promoted master reading xlogs generated by the old
- * master before our promotion.
+ * recovery or if we're a promoted primary reading xlogs generated by the old
+ * primary before our promotion.
*
* wantPage must be set to the start address of the page to read and
* wantLength to the amount of the page that will be read, up to
@@ -878,7 +878,7 @@ read_local_xlog_page(XLogReaderState *state, XLogRecPtr targetPagePtr,
* we actually read the xlog page, we might still try to read from the
* old (now renamed) segment and fail. There's not much we can do
* about this, but it can only happen when we're a leaf of a cascading
- * standby whose master gets promoted while we're decoding, so a
+ * standby whose primary gets promoted while we're decoding, so a
* one-off ERROR isn't too bad.
*/
XLogReadDetermineTimeline(state, targetPagePtr, reqLen);
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 2ec23016fe5..0152e3869ab 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -3969,7 +3969,7 @@ InitTempTableNamespace(void)
* Do not allow a Hot Standby session to make temp tables. Aside from
* problems with modifying the system catalogs, there is a naming
* conflict: pg_temp_N belongs to the session with BackendId N on the
- * master, not to a hot standby session with the same BackendId. We
+ * primary, not to a hot standby session with the same BackendId. We
* should not be able to get here anyway due to XactReadOnly checks, but
* let's just make real sure. Note that this also backstops various
* operations that allow XactReadOnly transactions to modify temp tables;
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 2ab02e01a02..fc7b375f485 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -3676,7 +3676,7 @@ AlterTableInternal(Oid relid, List *cmds, bool recurse)
* and does not travel through this section of code and cannot be combined with
* any of the subcommands given here.
*
- * Note that Hot Standby only knows about AccessExclusiveLocks on the master
+ * Note that Hot Standby only knows about AccessExclusiveLocks on the primary
* so any changes that might affect SELECTs running on standbys need to use
* AccessExclusiveLocks even if you think a lesser lock would do, unless you
* have a solution for that also.
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index b4d475bb0ba..dec02586c7f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -1059,8 +1059,8 @@ PostmasterMain(int argc, char *argv[])
* only during a few moments during a standby promotion. However there is
* a race condition: if pg_ctl promote is executed and creates the files
* during a promotion, the files can stay around even after the server is
- * brought up to new master. Then, if new standby starts by using the
- * backup taken from that master, the files can exist at the server
+ * brought up to be the primary. Then, if a new standby starts by using the
+ * backup taken from the new primary, the files can exist at the server
* startup and should be removed in order to avoid an unexpected
* promotion.
*
diff --git a/src/backend/replication/README b/src/backend/replication/README
index 8ccdd86e74b..c608ec12097 100644
--- a/src/backend/replication/README
+++ b/src/backend/replication/README
@@ -53,11 +53,11 @@ it. Before that, however, startup process fills in WalRcvData->conninfo
and WalRcvData->slotname, and initializes the starting point in
WalRcvData->receiveStart.
-As walreceiver receives WAL from the master server, and writes and flushes
+As walreceiver receives WAL from the primary server, and writes and flushes
it to disk (in pg_wal), it updates WalRcvData->flushedUpto and signals
the startup process to know how far WAL replay can advance.
-Walreceiver sends information about replication progress to the master server
+Walreceiver sends information about replication progress to the primary server
whenever it either writes or flushes new WAL, or the specified interval elapses.
This is used for reporting purpose.
@@ -68,7 +68,7 @@ At shutdown, postmaster handles walsender processes differently from regular
backends. It waits for regular backends to die before writing the
shutdown checkpoint and terminating pgarch and other auxiliary processes, but
that's not desirable for walsenders, because we want the standby servers to
-receive all the WAL, including the shutdown checkpoint, before the master
+receive all the WAL, including the shutdown checkpoint, before the primary
is shut down. Therefore postmaster treats walsenders like the pgarch process,
and instructs them to terminate at PM_SHUTDOWN_2 phase, after all regular
backends have died and checkpointer has issued the shutdown checkpoint.
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index 3b46bfe9ab0..02a344d7108 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -169,7 +169,7 @@ static const char *const excludeDirContents[] =
/*
* It is generally not useful to backup the contents of this directory
- * even if the intention is to restore to another master. See backup.sgml
+ * even if the intention is to restore to another primary. See backup.sgml
* for a more detailed description.
*/
"pg_replslot",
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index a752a1224d6..f90a896fc3e 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -1312,7 +1312,7 @@ apply_handle_truncate(StringInfo s)
}
/*
- * Even if we used CASCADE on the upstream master we explicitly default to
+ * Even if we used CASCADE on the upstream primary we explicitly default to
* replaying changes without further cascading. This might be later
* changeable with a user specified option.
*/
@@ -1661,7 +1661,7 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
* from the server for more than wal_receiver_timeout / 2, ping
* the server. Also, if it's been longer than
* wal_receiver_status_interval since the last update we sent,
- * send a status update to the master anyway, to report any
+ * send a status update to the primary anyway, to report any
* progress in applying WAL.
*/
bool requestReply = false;
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index d1ad75da87a..d5a9b568a68 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -357,8 +357,8 @@ WalReceiverMain(void)
/*
* Get any missing history files. We do this always, even when we're
* not interested in that timeline, so that if we're promoted to
- * become the master later on, we don't select the same timeline that
- * was already used in the current master. This isn't bullet-proof -
+ * become the primary later on, we don't select the same timeline that
+ * was already used in the current primary. This isn't bullet-proof -
* you'll need some external software to manage your cluster if you
* need to ensure that a unique timeline id is chosen in every case,
* but let's avoid the confusion of timeline id collisions where we
@@ -464,7 +464,7 @@ WalReceiverMain(void)
if (len > 0)
{
/*
- * Something was received from master, so reset
+ * Something was received from primary, so reset
* timeout
*/
last_recv_timestamp = GetCurrentTimestamp();
@@ -486,7 +486,7 @@ WalReceiverMain(void)
len = walrcv_receive(wrconn, &buf, &wait_fd);
}
- /* Let the master know that we received some data. */
+ /* Let the primary know that we received some data. */
XLogWalRcvSendReply(false, false);
/*
@@ -545,7 +545,7 @@ WalReceiverMain(void)
* wal_receiver_timeout / 2, ping the server. Also, if
* it's been longer than wal_receiver_status_interval
* since the last update we sent, send a status update to
- * the master anyway, to report any progress in applying
+ * the primary anyway, to report any progress in applying
* WAL.
*/
bool requestReply = false;
@@ -745,7 +745,7 @@ WalRcvFetchTimeLineHistoryFiles(TimeLineID first, TimeLineID last)
walrcv_readtimelinehistoryfile(wrconn, tli, &fname, &content, &len);
/*
- * Check that the filename on the master matches what we
+ * Check that the filename on the primary matches what we
* calculated ourselves. This is just a sanity check, it should
* always match.
*/
@@ -1034,7 +1034,7 @@ XLogWalRcvFlush(bool dying)
set_ps_display(activitymsg);
}
- /* Also let the master know that we made some progress */
+ /* Also let the primary know that we made some progress */
if (!dying)
{
XLogWalRcvSendReply(false, false);
@@ -1066,7 +1066,7 @@ XLogWalRcvSendReply(bool force, bool requestReply)
TimestampTz now;
/*
- * If the user doesn't want status to be reported to the master, be sure
+ * If the user doesn't want status to be reported to the primary, be sure
* to exit before doing anything at all.
*/
if (!force && wal_receiver_status_interval <= 0)
@@ -1080,7 +1080,7 @@ XLogWalRcvSendReply(bool force, bool requestReply)
* sent without taking any lock, but the apply position requires a spin
* lock, so we don't check that unless something else has changed or 10
* seconds have passed. This means that the apply WAL location will
- * appear, from the master's point of view, to lag slightly, but since
+ * appear, from the primary's point of view, to lag slightly, but since
* this is only for reporting purposes and only on idle systems, that's
* probably OK.
*/
@@ -1138,14 +1138,14 @@ XLogWalRcvSendHSFeedback(bool immed)
static TimestampTz sendTime = 0;
/* initially true so we always send at least one feedback message */
- static bool master_has_standby_xmin = true;
+ static bool primary_has_standby_xmin = true;
/*
- * If the user doesn't want status to be reported to the master, be sure
+ * If the user doesn't want status to be reported to the primary, be sure
* to exit before doing anything at all.
*/
if ((wal_receiver_status_interval <= 0 || !hot_standby_feedback) &&
- !master_has_standby_xmin)
+ !primary_has_standby_xmin)
return;
/* Get current timestamp. */
@@ -1168,7 +1168,7 @@ XLogWalRcvSendHSFeedback(bool immed)
* calls.
*
* Bailing out here also ensures that we don't send feedback until we've
- * read our own replication slot state, so we don't tell the master to
+ * read our own replication slot state, so we don't tell the primary to
* discard needed xmin or catalog_xmin from any slots that may exist on
* this replica.
*/
@@ -1230,9 +1230,9 @@ XLogWalRcvSendHSFeedback(bool immed)
pq_sendint32(&reply_message, catalog_xmin_epoch);
walrcv_send(wrconn, reply_message.data, reply_message.len);
if (TransactionIdIsValid(xmin) || TransactionIdIsValid(catalog_xmin))
- master_has_standby_xmin = true;
+ primary_has_standby_xmin = true;
else
- master_has_standby_xmin = false;
+ primary_has_standby_xmin = false;
}
/*
@@ -1291,7 +1291,7 @@ ProcessWalSndrMessage(XLogRecPtr walEnd, TimestampTz sendTime)
*
* This is called by the startup process whenever interesting xlog records
* are applied, so that walreceiver can check if it needs to send an apply
- * notification back to the master which may be waiting in a COMMIT with
+ * notification back to the primary which may be waiting in a COMMIT with
* synchronous_commit = remote_apply.
*/
void
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index e2477c47e0a..f66acb87206 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -2628,14 +2628,14 @@ XLogSendPhysical(void)
else
{
/*
- * Streaming the current timeline on a master.
+ * Streaming the current timeline on a primary.
*
* Attempt to send all data that's already been written out and
* fsync'd to disk. We cannot go further than what's been written out
* given the current implementation of WALRead(). And in any case
* it's unsafe to send WAL that is not securely down to disk on the
- * master: if the master subsequently crashes and restarts, standbys
- * must not have applied any WAL that got lost on the master.
+ * primary: if the primary subsequently crashes and restarts, standbys
+ * must not have applied any WAL that got lost on the primary.
*/
SendRqstPtr = GetFlushRecPtr();
}
@@ -2672,7 +2672,7 @@ XLogSendPhysical(void)
*
* Note: We might already have sent WAL > sendTimeLineValidUpto. The
* startup process will normally replay all WAL that has been received
- * from the master, before promoting, but if the WAL streaming is
+ * from the primary, before promoting, but if the WAL streaming is
* terminated at a WAL page boundary, the valid portion of the timeline
* might end in the middle of a WAL record. We might've already sent the
* first half of that partial WAL record to the cascading standby, so that
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 3c2b369615f..b4485335644 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -18,7 +18,7 @@
* at need by checking for pid == 0.
*
* During hot standby, we also keep a list of XIDs representing transactions
- * that are known to be running in the master (or more precisely, were running
+ * that are known to be running on the primary (or more precisely, were running
* as of the current point in the WAL stream). This list is kept in the
* KnownAssignedXids array, and is updated by watching the sequence of
* arriving XIDs. This is necessary because if we leave those XIDs out of
@@ -27,7 +27,7 @@
* array represents standby processes, which by definition are not running
* transactions that have XIDs.
*
- * It is perhaps possible for a backend on the master to terminate without
+ * It is perhaps possible for a backend on the primary to terminate without
* writing an abort record for its transaction. While that shouldn't really
* happen, it would tie up KnownAssignedXids indefinitely, so we protect
* ourselves by pruning the array when a valid list of running XIDs arrives.
@@ -651,7 +651,7 @@ ProcArrayInitRecovery(TransactionId initializedUptoXID)
* Normal case is to go all the way to Ready straight away, though there
* are atypical cases where we need to take it in steps.
*
- * Use the data about running transactions on master to create the initial
+ * Use the data about running transactions on the primary to create the initial
* state of KnownAssignedXids. We also use these records to regularly prune
* KnownAssignedXids because we know it is possible that some transactions
* with FATAL errors fail to write abort records, which could cause eventual
@@ -969,7 +969,7 @@ ProcArrayApplyXidAssignment(TransactionId topxid,
* We can find this out cheaply too.
*
* 3. In Hot Standby mode, we must search the KnownAssignedXids list to see
- * if the Xid is running on the master.
+ * if the Xid is running on the primary.
*
* 4. Search the SubTrans tree to find the Xid's topmost parent, and then see
* if that is running according to PGXACT or KnownAssignedXids. This is the
@@ -1198,7 +1198,7 @@ TransactionIdIsInProgress(TransactionId xid)
* TransactionIdIsActive -- is xid the top-level XID of an active backend?
*
* This differs from TransactionIdIsInProgress in that it ignores prepared
- * transactions, as well as transactions running on the master if we're in
+ * transactions, as well as transactions running on the primary if we're in
* hot standby. Also, we ignore subtransactions since that's not needed
* for current uses.
*/
@@ -1289,7 +1289,7 @@ TransactionIdIsActive(TransactionId xid)
* Nonetheless it is safe to vacuum a table in the current database with the
* first result. There are also replication-related effects: a walsender
* process can set its xmin based on transactions that are no longer running
- * in the master but are still being replayed on the standby, thus possibly
+ * on the primary but are still being replayed on the standby, thus possibly
* making the GetOldestXmin reading go backwards. In this case there is a
* possibility that we lose data that the standby would like to have, but
* unless the standby uses a replication slot to make its xmin persistent
@@ -1404,7 +1404,7 @@ GetOldestXmin(Relation rel, int flags)
*
* vacuum_defer_cleanup_age provides some additional "slop" for the
* benefit of hot standby queries on standby servers. This is quick
- * and dirty, and perhaps not all that useful unless the master has a
+ * and dirty, and perhaps not all that useful unless the primary has a
* predictable transaction rate, but it offers some protection when
* there's no walsender connection. Note that we are assuming
* vacuum_defer_cleanup_age isn't large enough to cause wraparound ---
@@ -3244,7 +3244,7 @@ DisplayXidCache(void)
/*
* In Hot Standby mode, we maintain a list of transactions that are (or were)
- * running in the master at the current point in WAL. These XIDs must be
+ * running on the primary at the current point in WAL. These XIDs must be
* treated as running by standby transactions, even though they are not in
* the standby server's PGXACT array.
*
@@ -3264,7 +3264,7 @@ DisplayXidCache(void)
* links are *not* maintained (which does not affect visibility).
*
* We have room in KnownAssignedXids and in snapshots to hold maxProcs *
- * (1 + PGPROC_MAX_CACHED_SUBXIDS) XIDs, so every master transaction must
+ * (1 + PGPROC_MAX_CACHED_SUBXIDS) XIDs, so every primary transaction must
* report its subtransaction XIDs in a WAL XLOG_XACT_ASSIGNMENT record at
* least every PGPROC_MAX_CACHED_SUBXIDS. When we receive one of these
* records, we mark the subXIDs as children of the top XID in pg_subtrans,
@@ -3439,7 +3439,7 @@ ExpireOldKnownAssignedTransactionIds(TransactionId xid)
* order, to be exact --- to allow binary search for specific XIDs. Note:
* in general TransactionIdPrecedes would not provide a total order, but
* we know that the entries present at any instant should not extend across
- * a large enough fraction of XID space to wrap around (the master would
+ * a large enough fraction of XID space to wrap around (the primary would
* shut down for fear of XID wrap long before that happens). So it's OK to
* use TransactionIdPrecedes as a binary-search comparator.
*
diff --git a/src/backend/storage/ipc/standby.c b/src/backend/storage/ipc/standby.c
index 9e0d5ec257f..f5229839cfc 100644
--- a/src/backend/storage/ipc/standby.c
+++ b/src/backend/storage/ipc/standby.c
@@ -61,7 +61,7 @@ typedef struct RecoveryLockListsEntry
/*
* InitRecoveryTransactionEnvironment
- * Initialize tracking of in-progress transactions in master
+ * Initialize tracking of our primary's in-progress transactions.
*
* We need to issue shared invalidations and hold locks. Holding locks
* means others may want to wait on us, so we need to make a lock table
diff --git a/src/backend/storage/lmgr/README b/src/backend/storage/lmgr/README
index 13eb1cc785a..c96cc7b7c5f 100644
--- a/src/backend/storage/lmgr/README
+++ b/src/backend/storage/lmgr/README
@@ -725,7 +725,7 @@ Deadlocks involving AccessExclusiveLocks are not possible, so we need
not be concerned that a user initiated deadlock can prevent recovery from
progressing.
-AccessExclusiveLocks on the primary or master node generate WAL records
+AccessExclusiveLocks on the primary node generate WAL records
that are then applied by the Startup process. Locks are released at end
of transaction just as they are in normal processing. These locks are
held by the Startup process, acting as a proxy for the backends that
diff --git a/src/backend/storage/page/README b/src/backend/storage/page/README
index 4e45bd92abc..e30d7ac59ad 100644
--- a/src/backend/storage/page/README
+++ b/src/backend/storage/page/README
@@ -61,4 +61,4 @@ recovery must not dirty the page if the buffer is not already dirty, when
checksums are enabled. Systems in Hot-Standby mode may benefit from hint bits
being set, but with checksums enabled, a page cannot be dirtied after setting a
hint bit (due to the torn page risk). So, it must wait for full-page images
-containing the hint bit updates to arrive from the master.
+containing the hint bit updates to arrive from the primary.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 75fc6f11d6a..1359caa08f6 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -708,8 +708,8 @@ const char *const config_group_names[] =
gettext_noop("Replication"),
/* REPLICATION_SENDING */
gettext_noop("Replication / Sending Servers"),
- /* REPLICATION_MASTER */
- gettext_noop("Replication / Master Server"),
+ /* REPLICATION_PRIMARY */
+ gettext_noop("Replication / Primary Server"),
/* REPLICATION_STANDBY */
gettext_noop("Replication / Standby Servers"),
/* REPLICATION_SUBSCRIBERS */
@@ -2549,7 +2549,7 @@ static struct config_int ConfigureNamesInt[] =
},
{
- {"vacuum_defer_cleanup_age", PGC_SIGHUP, REPLICATION_MASTER,
+ {"vacuum_defer_cleanup_age", PGC_SIGHUP, REPLICATION_PRIMARY,
gettext_noop("Number of transactions by which VACUUM and HOT cleanup should be deferred, if any."),
NULL
},
@@ -4292,7 +4292,7 @@ static struct config_string ConfigureNamesString[] =
},
{
- {"synchronous_standby_names", PGC_SIGHUP, REPLICATION_MASTER,
+ {"synchronous_standby_names", PGC_SIGHUP, REPLICATION_PRIMARY,
gettext_noop("Number of synchronous standbys and list of names of potential synchronous ones."),
NULL,
GUC_LIST_INPUT
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 3a25287a391..e6649de649b 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -284,7 +284,7 @@
# - Sending Servers -
-# Set these on the master and on any standby that will send replication data.
+# Set these on the primary and on any standby that will send replication data.
#max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
@@ -297,7 +297,7 @@
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
-# - Master Server -
+# - Primary Server -
# These settings are ignored on a standby server.
@@ -309,7 +309,7 @@
# - Standby Servers -
-# These settings are ignored on a master server.
+# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
@@ -329,7 +329,7 @@
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
- # communication from master
+ # communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
diff --git a/src/bin/pg_basebackup/pg_recvlogical.c b/src/bin/pg_basebackup/pg_recvlogical.c
index cc8616844ba..a4e0d6aeb29 100644
--- a/src/bin/pg_basebackup/pg_recvlogical.c
+++ b/src/bin/pg_basebackup/pg_recvlogical.c
@@ -286,7 +286,7 @@ StreamLogicalLog(void)
}
/*
- * Potentially send a status message to the master
+ * Potentially send a status message to the primary.
*/
now = feGetCurrentTimestamp();
diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c
index 62a342f77c9..d3f99d89c5c 100644
--- a/src/bin/pg_basebackup/receivelog.c
+++ b/src/bin/pg_basebackup/receivelog.c
@@ -417,7 +417,7 @@ CheckServerVersionForStreaming(PGconn *conn)
* race-y since a signal received while busy won't interrupt the wait.
*
* standby_message_timeout controls how often we send a message
- * back to the master letting it know our progress, in milliseconds.
+ * back to the primary letting it know our progress, in milliseconds.
* Zero means no messages are sent.
* This message will only contain the write location, and never
* flush or replay.
@@ -776,7 +776,7 @@ HandleCopyStream(PGconn *conn, StreamCtl *stream,
}
/*
- * Potentially send a status message to the master
+ * Potentially send a status message to the primary
*/
if (still_sending && stream->standby_message_timeout > 0 &&
feTimestampDifferenceExceeds(last_status, now,
diff --git a/src/bin/pg_rewind/copy_fetch.c b/src/bin/pg_rewind/copy_fetch.c
index 223c32628dd..1edab5f1867 100644
--- a/src/bin/pg_rewind/copy_fetch.c
+++ b/src/bin/pg_rewind/copy_fetch.c
@@ -76,7 +76,7 @@ recurse_dir(const char *datadir, const char *parentpath,
if (errno == ENOENT)
{
/*
- * File doesn't exist anymore. This is ok, if the new master
+ * File doesn't exist anymore. This is ok, if the new primary
* is running and the file was just removed. If it was a data
* file, there should be a WAL record of the removal. If it
* was something else, it couldn't have been anyway.
diff --git a/src/bin/pg_rewind/filemap.c b/src/bin/pg_rewind/filemap.c
index 36a2d623415..1879257b66a 100644
--- a/src/bin/pg_rewind/filemap.c
+++ b/src/bin/pg_rewind/filemap.c
@@ -62,7 +62,7 @@ static const char *excludeDirContents[] =
/*
* It is generally not useful to backup the contents of this directory
- * even if the intention is to restore to another master. See backup.sgml
+ * even if the intention is to restore to another primary. See backup.sgml
* for a more detailed description.
*/
"pg_replslot",
diff --git a/src/bin/pg_rewind/parsexlog.c b/src/bin/pg_rewind/parsexlog.c
index bc6f9769941..2325fb5d302 100644
--- a/src/bin/pg_rewind/parsexlog.c
+++ b/src/bin/pg_rewind/parsexlog.c
@@ -206,7 +206,7 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,
/*
* Check if it is a checkpoint record. This checkpoint record needs to
* be the latest checkpoint before WAL forked and not the checkpoint
- * where the master has been stopped to be rewinded.
+ * where the primary has been stopped to be rewinded.
*/
info = XLogRecGetInfo(xlogreader) & ~XLR_INFO_MASK;
if (searchptr < forkptr &&
--
2.25.0.114.g5b0ca878e0
v1-0003-code-s-master-leader.patchtext/x-diff; charset=us-asciiDownload
From d5e27cc1c3e3b64340c744bd535404cd932d7a16 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sun, 14 Jun 2020 14:22:47 -0700
Subject: [PATCH v1 3/8] code: s/master/leader/
This mostly makes the language consistent what we already use
externally.
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/include/catalog/pg_proc.h | 6 +-
src/include/libpq/pqmq.h | 2 +-
src/include/storage/backendid.h | 4 +-
src/backend/access/transam/parallel.c | 36 ++++-----
src/backend/access/transam/xact.c | 12 +--
src/backend/executor/execGrouping.c | 2 +-
src/backend/libpq/pqmq.c | 18 ++---
src/backend/optimizer/path/costsize.c | 2 +-
src/backend/optimizer/util/clauses.c | 4 +-
src/backend/utils/init/globals.c | 2 +-
src/backend/utils/misc/guc.c | 2 +-
src/bin/pg_dump/parallel.c | 108 +++++++++++++-------------
src/bin/pg_dump/parallel.h | 2 +-
src/bin/pg_dump/pg_backup_archiver.c | 10 +--
src/bin/pg_dump/pg_backup_directory.c | 2 +-
src/bin/pg_dump/pg_dump.c | 2 +-
contrib/pg_prewarm/autoprewarm.c | 18 ++---
doc/src/sgml/ref/pg_dump.sgml | 8 +-
18 files changed, 120 insertions(+), 120 deletions(-)
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index ee3959da09f..65e8c9f0546 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -157,10 +157,10 @@ typedef FormData_pg_proc *Form_pg_proc;
/*
* Symbolic values for proparallel column: these indicate whether a function
* can be safely be run in a parallel backend, during parallelism but
- * necessarily in the master, or only in non-parallel mode.
+ * necessarily in the leader, or only in non-parallel mode.
*/
-#define PROPARALLEL_SAFE 's' /* can run in worker or master */
-#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel master only */
+#define PROPARALLEL_SAFE 's' /* can run in worker or leader */
+#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel leader only */
#define PROPARALLEL_UNSAFE 'u' /* banned while in parallel mode */
/*
diff --git a/src/include/libpq/pqmq.h b/src/include/libpq/pqmq.h
index 6a3ccba97ac..ac0eb789d84 100644
--- a/src/include/libpq/pqmq.h
+++ b/src/include/libpq/pqmq.h
@@ -17,7 +17,7 @@
#include "storage/shm_mq.h"
extern void pq_redirect_to_shm_mq(dsm_segment *seg, shm_mq_handle *mqh);
-extern void pq_set_parallel_master(pid_t pid, BackendId backend_id);
+extern void pq_set_parallel_leader(pid_t pid, BackendId backend_id);
extern void pq_parse_errornotice(StringInfo str, ErrorData *edata);
diff --git a/src/include/storage/backendid.h b/src/include/storage/backendid.h
index 0c776a3e6cb..e5fe0e724c8 100644
--- a/src/include/storage/backendid.h
+++ b/src/include/storage/backendid.h
@@ -25,13 +25,13 @@ typedef int BackendId; /* unique currently active backend identifier */
extern PGDLLIMPORT BackendId MyBackendId; /* backend id of this backend */
/* backend id of our parallel session leader, or InvalidBackendId if none */
-extern PGDLLIMPORT BackendId ParallelMasterBackendId;
+extern PGDLLIMPORT BackendId ParallelLeaderBackendId;
/*
* The BackendId to use for our session's temp relations is normally our own,
* but parallel workers should use their leader's ID.
*/
#define BackendIdForTempRelations() \
- (ParallelMasterBackendId == InvalidBackendId ? MyBackendId : ParallelMasterBackendId)
+ (ParallelLeaderBackendId == InvalidBackendId ? MyBackendId : ParallelLeaderBackendId)
#endif /* BACKENDID_H */
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 14a86900198..b0426960c78 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -89,9 +89,9 @@ typedef struct FixedParallelState
Oid temp_toast_namespace_id;
int sec_context;
bool is_superuser;
- PGPROC *parallel_master_pgproc;
- pid_t parallel_master_pid;
- BackendId parallel_master_backend_id;
+ PGPROC *parallel_leader_pgproc;
+ pid_t parallel_leader_pid;
+ BackendId parallel_leader_backend_id;
TimestampTz xact_ts;
TimestampTz stmt_ts;
SerializableXactHandle serializable_xact_handle;
@@ -124,7 +124,7 @@ static FixedParallelState *MyFixedParallelState;
static dlist_head pcxt_list = DLIST_STATIC_INIT(pcxt_list);
/* Backend-local copy of data from FixedParallelState. */
-static pid_t ParallelMasterPid;
+static pid_t ParallelLeaderPid;
/*
* List of internal parallel worker entry points. We need this for
@@ -323,9 +323,9 @@ InitializeParallelDSM(ParallelContext *pcxt)
GetUserIdAndSecContext(&fps->current_user_id, &fps->sec_context);
GetTempNamespaceState(&fps->temp_namespace_id,
&fps->temp_toast_namespace_id);
- fps->parallel_master_pgproc = MyProc;
- fps->parallel_master_pid = MyProcPid;
- fps->parallel_master_backend_id = MyBackendId;
+ fps->parallel_leader_pgproc = MyProc;
+ fps->parallel_leader_pid = MyProcPid;
+ fps->parallel_leader_backend_id = MyBackendId;
fps->xact_ts = GetCurrentTransactionStartTimestamp();
fps->stmt_ts = GetCurrentStatementStartTimestamp();
fps->serializable_xact_handle = ShareSerializableXact();
@@ -857,8 +857,8 @@ WaitForParallelWorkersToFinish(ParallelContext *pcxt)
*
* This function ensures that workers have been completely shutdown. The
* difference between WaitForParallelWorkersToFinish and this function is
- * that former just ensures that last message sent by worker backend is
- * received by master backend whereas this ensures the complete shutdown.
+ * that the former just ensures that last message sent by a worker backend is
+ * received by the leader backend whereas this ensures the complete shutdown.
*/
static void
WaitForParallelWorkersToExit(ParallelContext *pcxt)
@@ -1302,8 +1302,8 @@ ParallelWorkerMain(Datum main_arg)
MyFixedParallelState = fps;
/* Arrange to signal the leader if we exit. */
- ParallelMasterPid = fps->parallel_master_pid;
- ParallelMasterBackendId = fps->parallel_master_backend_id;
+ ParallelLeaderPid = fps->parallel_leader_pid;
+ ParallelLeaderBackendId = fps->parallel_leader_backend_id;
on_shmem_exit(ParallelWorkerShutdown, (Datum) 0);
/*
@@ -1318,8 +1318,8 @@ ParallelWorkerMain(Datum main_arg)
shm_mq_set_sender(mq, MyProc);
mqh = shm_mq_attach(mq, seg, NULL);
pq_redirect_to_shm_mq(seg, mqh);
- pq_set_parallel_master(fps->parallel_master_pid,
- fps->parallel_master_backend_id);
+ pq_set_parallel_leader(fps->parallel_leader_pid,
+ fps->parallel_leader_backend_id);
/*
* Send a BackendKeyData message to the process that initiated parallelism
@@ -1347,8 +1347,8 @@ ParallelWorkerMain(Datum main_arg)
* deadlock. (If we can't join the lock group, the leader has gone away,
* so just exit quietly.)
*/
- if (!BecomeLockGroupMember(fps->parallel_master_pgproc,
- fps->parallel_master_pid))
+ if (!BecomeLockGroupMember(fps->parallel_leader_pgproc,
+ fps->parallel_leader_pid))
return;
/*
@@ -1410,7 +1410,7 @@ ParallelWorkerMain(Datum main_arg)
/* Restore transaction snapshot. */
tsnapspace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_SNAPSHOT, false);
RestoreTransactionSnapshot(RestoreSnapshot(tsnapspace),
- fps->parallel_master_pgproc);
+ fps->parallel_leader_pgproc);
/* Restore active snapshot. */
asnapspace = shm_toc_lookup(toc, PARALLEL_KEY_ACTIVE_SNAPSHOT, false);
@@ -1510,9 +1510,9 @@ ParallelWorkerReportLastRecEnd(XLogRecPtr last_xlog_end)
static void
ParallelWorkerShutdown(int code, Datum arg)
{
- SendProcSignal(ParallelMasterPid,
+ SendProcSignal(ParallelLeaderPid,
PROCSIG_PARALLEL_MESSAGE,
- ParallelMasterBackendId);
+ ParallelLeaderBackendId);
}
/*
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index cd30b62d365..89cfe4ba1a0 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -750,7 +750,7 @@ GetCurrentCommandId(bool used)
{
/*
* Forbid setting currentCommandIdUsed in a parallel worker, because
- * we have no provision for communicating this back to the master. We
+ * we have no provision for communicating this back to the leader. We
* could relax this restriction when currentCommandIdUsed was already
* true at the start of the parallel operation.
*/
@@ -987,7 +987,7 @@ ExitParallelMode(void)
/*
* IsInParallelMode
*
- * Are we in a parallel operation, as either the master or a worker? Check
+ * Are we in a parallel operation, as either the leader or a worker? Check
* this to prohibit operations that change backend-local state expected to
* match across all workers. Mere caches usually don't require such a
* restriction. State modified in a strict push/pop fashion, such as the
@@ -2162,13 +2162,13 @@ CommitTransaction(void)
else
{
/*
- * We must not mark our XID committed; the parallel master is
+ * We must not mark our XID committed; the parallel leader is
* responsible for that.
*/
latestXid = InvalidTransactionId;
/*
- * Make sure the master will know about any WAL we wrote before it
+ * Make sure the leader will know about any WAL we wrote before it
* commits.
*/
ParallelWorkerReportLastRecEnd(XactLastRecEnd);
@@ -2697,7 +2697,7 @@ AbortTransaction(void)
latestXid = InvalidTransactionId;
/*
- * Since the parallel master won't get our value of XactLastRecEnd in
+ * Since the parallel leader won't get our value of XactLastRecEnd in
* this case, we nudge WAL-writer ourselves in this case. See related
* comments in RecordTransactionAbort for why this matters.
*/
@@ -4486,7 +4486,7 @@ RollbackAndReleaseCurrentSubTransaction(void)
/*
* Unlike ReleaseCurrentSubTransaction(), this is nominally permitted
- * during parallel operations. That's because we may be in the master,
+ * during parallel operations. That's because we may be in the leader,
* recovering from an error thrown while we were in parallel mode. We
* won't reach here in a worker, because BeginInternalSubTransaction()
* will have failed.
diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c
index 8be36ca7634..019b87df21e 100644
--- a/src/backend/executor/execGrouping.c
+++ b/src/backend/executor/execGrouping.c
@@ -190,7 +190,7 @@ BuildTupleHashTableExt(PlanState *parent,
hashtable->cur_eq_func = NULL;
/*
- * If parallelism is in use, even if the master backend is performing the
+ * If parallelism is in use, even if the leader backend is performing the
* scan itself, we don't want to create the hashtable exactly the same way
* in all workers. As hashtables are iterated over in keyspace-order,
* doing so in all processes in the same way is likely to lead to
diff --git a/src/backend/libpq/pqmq.c b/src/backend/libpq/pqmq.c
index 743d24cee5c..f51d935daf8 100644
--- a/src/backend/libpq/pqmq.c
+++ b/src/backend/libpq/pqmq.c
@@ -23,8 +23,8 @@
static shm_mq_handle *pq_mq_handle;
static bool pq_mq_busy = false;
-static pid_t pq_mq_parallel_master_pid = 0;
-static pid_t pq_mq_parallel_master_backend_id = InvalidBackendId;
+static pid_t pq_mq_parallel_leader_pid = 0;
+static pid_t pq_mq_parallel_leader_backend_id = InvalidBackendId;
static void pq_cleanup_redirect_to_shm_mq(dsm_segment *seg, Datum arg);
static void mq_comm_reset(void);
@@ -73,15 +73,15 @@ pq_cleanup_redirect_to_shm_mq(dsm_segment *seg, Datum arg)
}
/*
- * Arrange to SendProcSignal() to the parallel master each time we transmit
+ * Arrange to SendProcSignal() to the parallel leader each time we transmit
* message data via the shm_mq.
*/
void
-pq_set_parallel_master(pid_t pid, BackendId backend_id)
+pq_set_parallel_leader(pid_t pid, BackendId backend_id)
{
Assert(PqCommMethods == &PqCommMqMethods);
- pq_mq_parallel_master_pid = pid;
- pq_mq_parallel_master_backend_id = backend_id;
+ pq_mq_parallel_leader_pid = pid;
+ pq_mq_parallel_leader_backend_id = backend_id;
}
static void
@@ -160,10 +160,10 @@ mq_putmessage(char msgtype, const char *s, size_t len)
{
result = shm_mq_sendv(pq_mq_handle, iov, 2, true);
- if (pq_mq_parallel_master_pid != 0)
- SendProcSignal(pq_mq_parallel_master_pid,
+ if (pq_mq_parallel_leader_pid != 0)
+ SendProcSignal(pq_mq_parallel_leader_pid,
PROCSIG_PARALLEL_MESSAGE,
- pq_mq_parallel_master_backend_id);
+ pq_mq_parallel_leader_backend_id);
if (result != SHM_MQ_WOULD_BLOCK)
break;
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 4ff3c7a2fd3..086e45e078a 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -11,7 +11,7 @@
* cpu_tuple_cost Cost of typical CPU time to process a tuple
* cpu_index_tuple_cost Cost of typical CPU time to process an index tuple
* cpu_operator_cost Cost of CPU time to execute an operator or function
- * parallel_tuple_cost Cost of CPU time to pass a tuple from worker to master backend
+ * parallel_tuple_cost Cost of CPU time to pass a tuple from worker to leader backend
* parallel_setup_cost Cost of setting up shared memory for parallelism
*
* We expect that the kernel will typically do some amount of read-ahead
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 0c6fe0115a1..e04b1440723 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -1028,8 +1028,8 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
* We can't pass Params to workers at the moment either, so they are also
* parallel-restricted, unless they are PARAM_EXTERN Params or are
* PARAM_EXEC Params listed in safe_param_ids, meaning they could be
- * either generated within the worker or can be computed in master and
- * then their value can be passed to the worker.
+ * either generated within workers or can be computed by the leader and
+ * then their value can be passed to workers.
*/
else if (IsA(node, Param))
{
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index eb196444198..8258819d78a 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -80,7 +80,7 @@ char postgres_exec_path[MAXPGPATH]; /* full path to backend */
BackendId MyBackendId = InvalidBackendId;
-BackendId ParallelMasterBackendId = InvalidBackendId;
+BackendId ParallelLeaderBackendId = InvalidBackendId;
Oid MyDatabaseId = InvalidOid;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 1359caa08f6..561d278e596 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3448,7 +3448,7 @@ static struct config_real ConfigureNamesReal[] =
{
{"parallel_tuple_cost", PGC_USERSET, QUERY_TUNING_COST,
gettext_noop("Sets the planner's estimate of the cost of "
- "passing each tuple (row) from worker to master backend."),
+ "passing each tuple (row) from worker to leader backend."),
NULL,
GUC_EXPLAIN
},
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index c25e3f7a888..f0587f41e49 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -16,20 +16,20 @@
/*
* Parallel operation works like this:
*
- * The original, master process calls ParallelBackupStart(), which forks off
+ * The original, leader process calls ParallelBackupStart(), which forks off
* the desired number of worker processes, which each enter WaitForCommands().
*
- * The master process dispatches an individual work item to one of the worker
+ * The leader process dispatches an individual work item to one of the worker
* processes in DispatchJobForTocEntry(). We send a command string such as
* "DUMP 1234" or "RESTORE 1234", where 1234 is the TocEntry ID.
* The worker process receives and decodes the command and passes it to the
* routine pointed to by AH->WorkerJobDumpPtr or AH->WorkerJobRestorePtr,
* which are routines of the current archive format. That routine performs
* the required action (dump or restore) and returns an integer status code.
- * This is passed back to the master where we pass it to the
+ * This is passed back to the leader where we pass it to the
* ParallelCompletionPtr callback function that was passed to
* DispatchJobForTocEntry(). The callback function does state updating
- * for the master control logic in pg_backup_archiver.c.
+ * for the leader control logic in pg_backup_archiver.c.
*
* In principle additional archive-format-specific information might be needed
* in commands or worker status responses, but so far that hasn't proved
@@ -40,7 +40,7 @@
* threads in the same process. To avoid problems, they work with cloned
* copies of the Archive data structure; see RunWorker().)
*
- * In the master process, the workerStatus field for each worker has one of
+ * In the leader process, the workerStatus field for each worker has one of
* the following values:
* WRKR_NOT_STARTED: we've not yet forked this worker
* WRKR_IDLE: it's waiting for a command
@@ -88,8 +88,8 @@ typedef enum
/*
* Private per-parallel-worker state (typedef for this is in parallel.h).
*
- * Much of this is valid only in the master process (or, on Windows, should
- * be touched only by the master thread). But the AH field should be touched
+ * Much of this is valid only in the leader process (or, on Windows, should
+ * be touched only by the leader thread). But the AH field should be touched
* only by workers. The pipe descriptors are valid everywhere.
*/
struct ParallelSlot
@@ -102,7 +102,7 @@ struct ParallelSlot
ArchiveHandle *AH; /* Archive data worker is using */
- int pipeRead; /* master's end of the pipes */
+ int pipeRead; /* leader's end of the pipes */
int pipeWrite;
int pipeRevRead; /* child's end of the pipes */
int pipeRevWrite;
@@ -124,7 +124,7 @@ struct ParallelSlot
*/
typedef struct
{
- ArchiveHandle *AH; /* master database connection */
+ ArchiveHandle *AH; /* leader database connection */
ParallelSlot *slot; /* this worker's parallel slot */
} WorkerInfo;
@@ -157,9 +157,9 @@ static ShutdownInformation shutdown_info;
* State info for signal handling.
* We assume signal_info initializes to zeroes.
*
- * On Unix, myAH is the master DB connection in the master process, and the
+ * On Unix, myAH is the leader DB connection in the leader process, and the
* worker's own connection in worker processes. On Windows, we have only one
- * instance of signal_info, so myAH is the master connection and the worker
+ * instance of signal_info, so myAH is the leader connection and the worker
* connections must be dug out of pstate->parallelSlot[].
*/
typedef struct DumpSignalInformation
@@ -216,8 +216,8 @@ static void lockTableForWorker(ArchiveHandle *AH, TocEntry *te);
static void WaitForCommands(ArchiveHandle *AH, int pipefd[2]);
static bool ListenToWorkers(ArchiveHandle *AH, ParallelState *pstate,
bool do_wait);
-static char *getMessageFromMaster(int pipefd[2]);
-static void sendMessageToMaster(int pipefd[2], const char *str);
+static char *getMessageFromLeader(int pipefd[2]);
+static void sendMessageToLeader(int pipefd[2], const char *str);
static int select_loop(int maxFd, fd_set *workerset);
static char *getMessageFromWorker(ParallelState *pstate,
bool do_wait, int *worker);
@@ -277,7 +277,7 @@ init_parallel_dump_utils(void)
/*
* Find the ParallelSlot for the current worker process or thread.
*
- * Returns NULL if no matching slot is found (this implies we're the master).
+ * Returns NULL if no matching slot is found (this implies we're the leader).
*/
static ParallelSlot *
GetMyPSlot(ParallelState *pstate)
@@ -367,7 +367,7 @@ archive_close_connection(int code, void *arg)
if (!slot)
{
/*
- * We're the master. Forcibly shut down workers, then close our
+ * We're the leader. Forcibly shut down workers, then close our
* own database connection, if any.
*/
ShutdownWorkersHard(si->pstate);
@@ -381,7 +381,7 @@ archive_close_connection(int code, void *arg)
* We're a worker. Shut down our own DB connection if any. On
* Windows, we also have to close our communication sockets, to
* emulate what will happen on Unix when the worker process exits.
- * (Without this, if this is a premature exit, the master would
+ * (Without this, if this is a premature exit, the leader would
* fail to detect it because there would be no EOF condition on
* the other end of the pipe.)
*/
@@ -396,7 +396,7 @@ archive_close_connection(int code, void *arg)
}
else
{
- /* Non-parallel operation: just kill the master DB connection */
+ /* Non-parallel operation: just kill the leader DB connection */
if (si->AHX)
DisconnectDatabase(si->AHX);
}
@@ -541,11 +541,11 @@ WaitForTerminatingWorkers(ParallelState *pstate)
*
* In parallel operation on Unix, each process is responsible for canceling
* its own connection (this must be so because nobody else has access to it).
- * Furthermore, the master process should attempt to forward its signal to
+ * Furthermore, the leader process should attempt to forward its signal to
* each child. In simple manual use of pg_dump/pg_restore, forwarding isn't
* needed because typing control-C at the console would deliver SIGINT to
* every member of the terminal process group --- but in other scenarios it
- * might be that only the master gets signaled.
+ * might be that only the leader gets signaled.
*
* On Windows, the cancel handler runs in a separate thread, because that's
* how SetConsoleCtrlHandler works. We make it stop worker threads, send
@@ -576,8 +576,8 @@ sigTermHandler(SIGNAL_ARGS)
pqsignal(SIGQUIT, SIG_IGN);
/*
- * If we're in the master, forward signal to all workers. (It seems best
- * to do this before PQcancel; killing the master transaction will result
+ * If we're in the leader, forward signal to all workers. (It seems best
+ * to do this before PQcancel; killing the leader transaction will result
* in invalid-snapshot errors from active workers, which maybe we can
* quiet by killing workers first.) Ignore any errors.
*/
@@ -601,7 +601,7 @@ sigTermHandler(SIGNAL_ARGS)
/*
* Report we're quitting, using nothing more complicated than write(2).
- * When in parallel operation, only the master process should do this.
+ * When in parallel operation, only the leader process should do this.
*/
if (!signal_info.am_worker)
{
@@ -665,7 +665,7 @@ consoleHandler(DWORD dwCtrlType)
* If in parallel mode, stop worker threads and send QueryCancel to
* their connected backends. The main point of stopping the worker
* threads is to keep them from reporting the query cancels as errors,
- * which would clutter the user's screen. We needn't stop the master
+ * which would clutter the user's screen. We needn't stop the leader
* thread since it won't be doing much anyway. Do this before
* canceling the main transaction, else we might get invalid-snapshot
* errors reported before we can stop the workers. Ignore errors,
@@ -693,7 +693,7 @@ consoleHandler(DWORD dwCtrlType)
}
/*
- * Send QueryCancel to master connection, if enabled. Ignore errors,
+ * Send QueryCancel to leader connection, if enabled. Ignore errors,
* there's not much we can do about them anyway.
*/
if (signal_info.myAH != NULL && signal_info.myAH->connCancel != NULL)
@@ -949,11 +949,11 @@ ParallelBackupStart(ArchiveHandle *AH)
shutdown_info.pstate = pstate;
/*
- * Temporarily disable query cancellation on the master connection. This
+ * Temporarily disable query cancellation on the leader connection. This
* ensures that child processes won't inherit valid AH->connCancel
- * settings and thus won't try to issue cancels against the master's
+ * settings and thus won't try to issue cancels against the leader's
* connection. No harm is done if we fail while it's disabled, because
- * the master connection is idle at this point anyway.
+ * the leader connection is idle at this point anyway.
*/
set_archive_cancel_info(AH, NULL);
@@ -977,7 +977,7 @@ ParallelBackupStart(ArchiveHandle *AH)
if (pgpipe(pipeMW) < 0 || pgpipe(pipeWM) < 0)
fatal("could not create communication channels: %m");
- /* master's ends of the pipes */
+ /* leader's ends of the pipes */
slot->pipeRead = pipeWM[PIPE_READ];
slot->pipeWrite = pipeMW[PIPE_WRITE];
/* child's ends of the pipes */
@@ -1008,13 +1008,13 @@ ParallelBackupStart(ArchiveHandle *AH)
/* instruct signal handler that we're in a worker now */
signal_info.am_worker = true;
- /* close read end of Worker -> Master */
+ /* close read end of Worker -> Leader */
closesocket(pipeWM[PIPE_READ]);
- /* close write end of Master -> Worker */
+ /* close write end of Leader -> Worker */
closesocket(pipeMW[PIPE_WRITE]);
/*
- * Close all inherited fds for communication of the master with
+ * Close all inherited fds for communication of the leader with
* previously-forked workers.
*/
for (j = 0; j < i; j++)
@@ -1035,19 +1035,19 @@ ParallelBackupStart(ArchiveHandle *AH)
fatal("could not create worker process: %m");
}
- /* In Master after successful fork */
+ /* In Leader after successful fork */
slot->pid = pid;
slot->workerStatus = WRKR_IDLE;
- /* close read end of Master -> Worker */
+ /* close read end of Leader -> Worker */
closesocket(pipeMW[PIPE_READ]);
- /* close write end of Worker -> Master */
+ /* close write end of Worker -> Leader */
closesocket(pipeWM[PIPE_WRITE]);
#endif /* WIN32 */
}
/*
- * Having forked off the workers, disable SIGPIPE so that master isn't
+ * Having forked off the workers, disable SIGPIPE so that leader isn't
* killed if it tries to send a command to a dead worker. We don't want
* the workers to inherit this setting, though.
*/
@@ -1056,7 +1056,7 @@ ParallelBackupStart(ArchiveHandle *AH)
#endif
/*
- * Re-establish query cancellation on the master connection.
+ * Re-establish query cancellation on the leader connection.
*/
set_archive_cancel_info(AH, AH->connection);
@@ -1162,12 +1162,12 @@ parseWorkerCommand(ArchiveHandle *AH, TocEntry **te, T_Action *act,
Assert(*te != NULL);
}
else
- fatal("unrecognized command received from master: \"%s\"",
+ fatal("unrecognized command received from leader: \"%s\"",
msg);
}
/*
- * buildWorkerResponse: format a response string to send to the master.
+ * buildWorkerResponse: format a response string to send to the leader.
*
* The string is built in the caller-supplied buffer of size buflen.
*/
@@ -1299,16 +1299,16 @@ IsEveryWorkerIdle(ParallelState *pstate)
/*
* Acquire lock on a table to be dumped by a worker process.
*
- * The master process is already holding an ACCESS SHARE lock. Ordinarily
+ * The leader process is already holding an ACCESS SHARE lock. Ordinarily
* it's no problem for a worker to get one too, but if anything else besides
* pg_dump is running, there's a possible deadlock:
*
- * 1) Master dumps the schema and locks all tables in ACCESS SHARE mode.
+ * 1) Leader dumps the schema and locks all tables in ACCESS SHARE mode.
* 2) Another process requests an ACCESS EXCLUSIVE lock (which is not granted
- * because the master holds a conflicting ACCESS SHARE lock).
+ * because the leader holds a conflicting ACCESS SHARE lock).
* 3) A worker process also requests an ACCESS SHARE lock to read the table.
* The worker is enqueued behind the ACCESS EXCLUSIVE lock request.
- * 4) Now we have a deadlock, since the master is effectively waiting for
+ * 4) Now we have a deadlock, since the leader is effectively waiting for
* the worker. The server cannot detect that, however.
*
* To prevent an infinite wait, prior to touching a table in a worker, request
@@ -1349,7 +1349,7 @@ lockTableForWorker(ArchiveHandle *AH, TocEntry *te)
/*
* WaitForCommands: main routine for a worker process.
*
- * Read and execute commands from the master until we see EOF on the pipe.
+ * Read and execute commands from the leader until we see EOF on the pipe.
*/
static void
WaitForCommands(ArchiveHandle *AH, int pipefd[2])
@@ -1362,7 +1362,7 @@ WaitForCommands(ArchiveHandle *AH, int pipefd[2])
for (;;)
{
- if (!(command = getMessageFromMaster(pipefd)))
+ if (!(command = getMessageFromLeader(pipefd)))
{
/* EOF, so done */
return;
@@ -1387,10 +1387,10 @@ WaitForCommands(ArchiveHandle *AH, int pipefd[2])
else
Assert(false);
- /* Return status to master */
+ /* Return status to leader */
buildWorkerResponse(AH, te, act, status, buf, sizeof(buf));
- sendMessageToMaster(pipefd, buf);
+ sendMessageToLeader(pipefd, buf);
/* command was pg_malloc'd and we are responsible for free()ing it. */
free(command);
@@ -1464,7 +1464,7 @@ ListenToWorkers(ArchiveHandle *AH, ParallelState *pstate, bool do_wait)
* Any received results are passed to the callback specified to
* DispatchJobForTocEntry.
*
- * This function is executed in the master process.
+ * This function is executed in the leader process.
*/
void
WaitForWorkers(ArchiveHandle *AH, ParallelState *pstate, WFW_WaitOption mode)
@@ -1525,25 +1525,25 @@ WaitForWorkers(ArchiveHandle *AH, ParallelState *pstate, WFW_WaitOption mode)
}
/*
- * Read one command message from the master, blocking if necessary
+ * Read one command message from the leader, blocking if necessary
* until one is available, and return it as a malloc'd string.
* On EOF, return NULL.
*
* This function is executed in worker processes.
*/
static char *
-getMessageFromMaster(int pipefd[2])
+getMessageFromLeader(int pipefd[2])
{
return readMessageFromPipe(pipefd[PIPE_READ]);
}
/*
- * Send a status message to the master.
+ * Send a status message to the leader.
*
* This function is executed in worker processes.
*/
static void
-sendMessageToMaster(int pipefd[2], const char *str)
+sendMessageToLeader(int pipefd[2], const char *str)
{
int len = strlen(str) + 1;
@@ -1592,7 +1592,7 @@ select_loop(int maxFd, fd_set *workerset)
* that's hard to distinguish from the no-data-available case, but for now
* our one caller is okay with that.
*
- * This function is executed in the master process.
+ * This function is executed in the leader process.
*/
static char *
getMessageFromWorker(ParallelState *pstate, bool do_wait, int *worker)
@@ -1657,7 +1657,7 @@ getMessageFromWorker(ParallelState *pstate, bool do_wait, int *worker)
/*
* Send a command message to the specified worker process.
*
- * This function is executed in the master process.
+ * This function is executed in the leader process.
*/
static void
sendMessageToWorker(ParallelState *pstate, int worker, const char *str)
@@ -1688,7 +1688,7 @@ readMessageFromPipe(int fd)
/*
* In theory, if we let piperead() read multiple bytes, it might give us
* back fragments of multiple messages. (That can't actually occur, since
- * neither master nor workers send more than one message without waiting
+ * neither leader nor workers send more than one message without waiting
* for a reply, but we don't wish to assume that here.) For simplicity,
* read a byte at a time until we get the terminating '\0'. This method
* is a bit inefficient, but since this is only used for relatively short
diff --git a/src/bin/pg_dump/parallel.h b/src/bin/pg_dump/parallel.h
index 4f8e627cd5b..a2e98cb87bf 100644
--- a/src/bin/pg_dump/parallel.h
+++ b/src/bin/pg_dump/parallel.h
@@ -18,7 +18,7 @@
#include "pg_backup_archiver.h"
-/* Function to call in master process on completion of a worker task */
+/* Function to call in leader process on completion of a worker task */
typedef void (*ParallelCompletionPtr) (ArchiveHandle *AH,
TocEntry *te,
int status,
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 4c91b9e1bcc..c05a1fd6af0 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -662,7 +662,7 @@ RestoreArchive(Archive *AHX)
restore_toc_entries_parallel(AH, pstate, &pending_list);
ParallelBackupEnd(AH, pstate);
- /* reconnect the master and see if we missed something */
+ /* reconnect the leader and see if we missed something */
restore_toc_entries_postfork(AH, &pending_list);
Assert(AH->connection != NULL);
}
@@ -2393,7 +2393,7 @@ WriteDataChunks(ArchiveHandle *AH, ParallelState *pstate)
if (pstate && pstate->numWorkers > 1)
{
/*
- * In parallel mode, this code runs in the master process. We
+ * In parallel mode, this code runs in the leader process. We
* construct an array of candidate TEs, then sort it into decreasing
* size order, then dispatch each TE to a data-transfer worker. By
* dumping larger tables first, we avoid getting into a situation
@@ -2447,7 +2447,7 @@ WriteDataChunks(ArchiveHandle *AH, ParallelState *pstate)
/*
- * Callback function that's invoked in the master process after a step has
+ * Callback function that's invoked in the leader process after a step has
* been parallel dumped.
*
* We don't need to do anything except check for worker failure.
@@ -4437,7 +4437,7 @@ pop_next_work_item(ArchiveHandle *AH, ParallelReadyList *ready_list,
* this is run in the worker, i.e. in a thread (Windows) or a separate process
* (everything else). A worker process executes several such work items during
* a parallel backup or restore. Once we terminate here and report back that
- * our work is finished, the master process will assign us a new work item.
+ * our work is finished, the leader process will assign us a new work item.
*/
int
parallel_restore(ArchiveHandle *AH, TocEntry *te)
@@ -4457,7 +4457,7 @@ parallel_restore(ArchiveHandle *AH, TocEntry *te)
/*
- * Callback function that's invoked in the master process after a step has
+ * Callback function that's invoked in the leader process after a step has
* been parallel restored.
*
* Update status and reduce the dependency count of any dependent items.
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index ac81151acc9..5bb0a464261 100644
--- a/src/bin/pg_dump/pg_backup_directory.c
+++ b/src/bin/pg_dump/pg_backup_directory.c
@@ -789,7 +789,7 @@ _Clone(ArchiveHandle *AH)
*/
/*
- * We also don't copy the ParallelState pointer (pstate), only the master
+ * We also don't copy the ParallelState pointer (pstate), only the leader
* process ever writes to it.
*/
}
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 89d598f8568..6cf35acc54d 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1238,7 +1238,7 @@ static void
setupDumpWorker(Archive *AH)
{
/*
- * We want to re-select all the same values the master connection is
+ * We want to re-select all the same values the leader connection is
* using. We'll have inherited directly-usable values in
* AH->sync_snapshot_id and AH->use_role, but we need to translate the
* inherited encoding value back to a string to pass to setup_connection.
diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c
index 6cc8634a841..d797095458a 100644
--- a/contrib/pg_prewarm/autoprewarm.c
+++ b/contrib/pg_prewarm/autoprewarm.c
@@ -11,7 +11,7 @@
* pages from a relation that is in the process of being dropped.
*
* While prewarming, autoprewarm will use two workers. There's a
- * master worker that reads and sorts the list of blocks to be
+ * leader worker that reads and sorts the list of blocks to be
* prewarmed and then launches a per-database worker for each
* relevant database in turn. The former keeps running after the
* initial prewarm is complete to update the dump file periodically.
@@ -88,7 +88,7 @@ PG_FUNCTION_INFO_V1(autoprewarm_dump_now);
static void apw_load_buffers(void);
static int apw_dump_now(bool is_bgworker, bool dump_unlogged);
-static void apw_start_master_worker(void);
+static void apw_start_leader_worker(void);
static void apw_start_database_worker(void);
static bool apw_init_shmem(void);
static void apw_detach_shmem(int code, Datum arg);
@@ -146,11 +146,11 @@ _PG_init(void)
/* Register autoprewarm worker, if enabled. */
if (autoprewarm)
- apw_start_master_worker();
+ apw_start_leader_worker();
}
/*
- * Main entry point for the master autoprewarm process. Per-database workers
+ * Main entry point for the leader autoprewarm process. Per-database workers
* have a separate entry point.
*/
void
@@ -716,7 +716,7 @@ autoprewarm_start_worker(PG_FUNCTION_ARGS)
errmsg("autoprewarm worker is already running under PID %lu",
(unsigned long) pid)));
- apw_start_master_worker();
+ apw_start_leader_worker();
PG_RETURN_VOID();
}
@@ -786,10 +786,10 @@ apw_detach_shmem(int code, Datum arg)
}
/*
- * Start autoprewarm master worker process.
+ * Start autoprewarm leader worker process.
*/
static void
-apw_start_master_worker(void)
+apw_start_leader_worker(void)
{
BackgroundWorker worker;
BackgroundWorkerHandle *handle;
@@ -801,8 +801,8 @@ apw_start_master_worker(void)
worker.bgw_start_time = BgWorkerStart_ConsistentState;
strcpy(worker.bgw_library_name, "pg_prewarm");
strcpy(worker.bgw_function_name, "autoprewarm_main");
- strcpy(worker.bgw_name, "autoprewarm master");
- strcpy(worker.bgw_type, "autoprewarm master");
+ strcpy(worker.bgw_name, "autoprewarm leader");
+ strcpy(worker.bgw_type, "autoprewarm leader");
if (process_shared_preload_libraries_in_progress)
{
diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 2f0807e9127..efad5859356 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -332,12 +332,12 @@ PostgreSQL documentation
</para>
<para>
Requesting exclusive locks on database objects while running a parallel dump could
- cause the dump to fail. The reason is that the <application>pg_dump</application> master process
+ cause the dump to fail. The reason is that the <application>pg_dump</application> leader process
requests shared locks on the objects that the worker processes are going to dump later
in order to
make sure that nobody deletes them and makes them go away while the dump is running.
If another client then requests an exclusive lock on a table, that lock will not be
- granted but will be queued waiting for the shared lock of the master process to be
+ granted but will be queued waiting for the shared lock of the leader process to be
released. Consequently any other access to the table will not be granted either and
will queue after the exclusive lock request. This includes the worker process trying
to dump the table. Without any precautions this would be a classic deadlock situation.
@@ -354,14 +354,14 @@ PostgreSQL documentation
for standbys. With this feature, database clients can ensure they see
the same data set even though they use different connections.
<command>pg_dump -j</command> uses multiple database connections; it
- connects to the database once with the master process and once again
+ connects to the database once with the leader process and once again
for each worker job. Without the synchronized snapshot feature, the
different worker jobs wouldn't be guaranteed to see the same data in
each connection, which could lead to an inconsistent backup.
</para>
<para>
If you want to run a parallel dump of a pre-9.2 server, you need to make sure that the
- database content doesn't change from between the time the master connects to the
+ database content doesn't change from between the time the leader connects to the
database until the last worker job has connected to the database. The easiest way to
do this is to halt any data modifying processes (DDL and DML) accessing the database
before starting the backup. You also need to specify the
--
2.25.0.114.g5b0ca878e0
v1-0004-code-s-master-other.patchtext/x-diff; charset=us-asciiDownload
From 8a5abe714b382a166c842d875c74b977230c93ee Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 15 Jun 2020 10:14:40 -0700
Subject: [PATCH v1 4/8] code: s/master/$other/
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
src/include/nodes/execnodes.h | 2 +-
src/include/utils/snapmgr.h | 2 +-
src/backend/access/transam/xlog.c | 16 +++++++++-------
src/backend/catalog/index.c | 2 +-
src/backend/catalog/toasting.c | 4 ++--
src/backend/commands/vacuum.c | 4 ++--
src/backend/libpq/hba.c | 12 +++++++-----
src/backend/libpq/pqcomm.c | 4 ++--
src/backend/optimizer/plan/planner.c | 4 ++--
src/backend/parser/gram.y | 2 +-
src/backend/snowball/README | 4 ++--
src/backend/utils/time/snapmgr.c | 4 ++--
src/pl/tcl/pltcl.c | 10 +++++-----
13 files changed, 37 insertions(+), 33 deletions(-)
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 98e0072b8ad..651c915f239 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -496,7 +496,7 @@ typedef struct ResultRelInfo
/* ----------------
* EState information
*
- * Master working state for an Executor invocation
+ * Working state for an Executor invocation
* ----------------
*/
typedef struct EState
diff --git a/src/include/utils/snapmgr.h b/src/include/utils/snapmgr.h
index b28d13ce841..ffb4ba3adfb 100644
--- a/src/include/utils/snapmgr.h
+++ b/src/include/utils/snapmgr.h
@@ -153,6 +153,6 @@ extern bool HistoricSnapshotActive(void);
extern Size EstimateSnapshotSpace(Snapshot snapshot);
extern void SerializeSnapshot(Snapshot snapshot, char *start_address);
extern Snapshot RestoreSnapshot(char *start_address);
-extern void RestoreTransactionSnapshot(Snapshot snapshot, void *master_pgproc);
+extern void RestoreTransactionSnapshot(Snapshot snapshot, void *source_pgproc);
#endif /* SNAPMGR_H */
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 2e07ca00774..083ce3e318a 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -562,11 +562,12 @@ typedef struct XLogCtlInsert
char pad[PG_CACHE_LINE_SIZE];
/*
- * fullPageWrites is the master copy used by all backends to determine
- * whether to write full-page to WAL, instead of using process-local one.
- * This is required because, when full_page_writes is changed by SIGHUP,
- * we must WAL-log it before it actually affects WAL-logging by backends.
- * Checkpointer sets at startup or after SIGHUP.
+ * fullPageWrites is the authoritative value used by all backends to
+ * determine whether to write full-page image to WAL. This shared value,
+ * instead of the process-local fullPageWrites, is required because, when
+ * full_page_writes is changed by SIGHUP, we must WAL-log it before it
+ * actually affects WAL-logging by backends. Checkpointer sets at startup
+ * or after SIGHUP.
*
* To read these fields, you must hold an insertion lock. To modify them,
* you must hold ALL the locks.
@@ -8364,8 +8365,9 @@ GetRedoRecPtr(void)
/*
* The possibly not up-to-date copy in XlogCtl is enough. Even if we
- * grabbed a WAL insertion lock to read the master copy, someone might
- * update it just after we've released the lock.
+ * grabbed a WAL insertion lock to read the authoritative value in
+ * Insert->RedoRecPtr, someone might update it just after we've released
+ * the lock.
*/
SpinLockAcquire(&XLogCtl->info_lck);
ptr = XLogCtl->RedoRecPtr;
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index cdc01c49c9f..732cc66532f 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -3782,7 +3782,7 @@ reindex_relation(Oid relid, int flags, int options)
/*
* If the relation has a secondary toast rel, reindex that too while we
- * still hold the lock on the master table.
+ * still hold the lock on the main table.
*/
if ((flags & REINDEX_REL_PROCESS_TOAST) && OidIsValid(toast_relid))
result |= reindex_relation(toast_relid, flags, options);
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 3f7ab8d389b..9d07cdcce06 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -361,8 +361,8 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
table_close(class_rel, RowExclusiveLock);
/*
- * Register dependency from the toast table to the master, so that the
- * toast table will be deleted if the master is. Skip this in bootstrap
+ * Register dependency from the toast table to the main, so that the
+ * toast table will be deleted if the main is. Skip this in bootstrap
* mode.
*/
if (!IsBootstrapProcessingMode())
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index d32de23e626..576c7e63e99 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -1897,7 +1897,7 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params)
/*
* If the relation has a secondary toast rel, vacuum that too while we
- * still hold the session lock on the master table. Note however that
+ * still hold the session lock on the main table. Note however that
* "analyze" will not get done on the toast table. This is good, because
* the toaster always uses hardcoded index access and statistics are
* totally unimportant for toast relations.
@@ -1906,7 +1906,7 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params)
vacuum_rel(toast_relid, NULL, params);
/*
- * Now release the session-level lock on the master table.
+ * Now release the session-level lock on the main table.
*/
UnlockRelationIdForSession(&onerelid, lmode);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index da5189a4fa0..9d63830553e 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -145,7 +145,7 @@ static List *tokenize_inc_file(List *tokens, const char *outer_filename,
static bool parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
int elevel, char **err_msg);
static bool verify_option_list_length(List *options, const char *optionname,
- List *masters, const char *mastername, int line_num);
+ List *comparelist, const char *comparename, int line_num);
static ArrayType *gethba_options(HbaLine *hba);
static void fill_hba_line(Tuplestorestate *tuple_store, TupleDesc tupdesc,
int lineno, HbaLine *hba, const char *err_msg);
@@ -1648,11 +1648,13 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
static bool
-verify_option_list_length(List *options, const char *optionname, List *masters, const char *mastername, int line_num)
+verify_option_list_length(List *options, const char *optionname,
+ List *comparelist, const char *comparename,
+ int line_num)
{
if (list_length(options) == 0 ||
list_length(options) == 1 ||
- list_length(options) == list_length(masters))
+ list_length(options) == list_length(comparelist))
return true;
ereport(LOG,
@@ -1660,8 +1662,8 @@ verify_option_list_length(List *options, const char *optionname, List *masters,
errmsg("the number of %s (%d) must be 1 or the same as the number of %s (%d)",
optionname,
list_length(options),
- mastername,
- list_length(masters)
+ comparename,
+ list_length(comparelist)
),
errcontext("line %d of configuration file \"%s\"",
line_num, HbaFileName)));
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 7717bb27195..ac986c05056 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -705,8 +705,8 @@ Setup_AF_UNIX(const char *sock_path)
* server port. Set port->sock to the FD of the new connection.
*
* ASSUME: that this doesn't need to be non-blocking because
- * the Postmaster uses select() to tell when the server master
- * socket is ready for accept().
+ * the Postmaster uses select() to tell when the socket is ready for
+ * accept().
*
* RETURNS: STATUS_OK or STATUS_ERROR
*/
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 4131019fc98..050d67ac638 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1748,7 +1748,7 @@ inheritance_planner(PlannerInfo *root)
else
{
/*
- * Put back the final adjusted rtable into the master copy of the
+ * Put back the final adjusted rtable into the original copy of the
* Query. (We mustn't do this if we found no non-excluded children,
* since we never saved an adjusted rtable at all.)
*/
@@ -1757,7 +1757,7 @@ inheritance_planner(PlannerInfo *root)
root->simple_rel_array = save_rel_array;
root->append_rel_array = save_append_rel_array;
- /* Must reconstruct master's simple_rte_array, too */
+ /* Must reconstruct original's simple_rte_array, too */
root->simple_rte_array = (RangeTblEntry **)
palloc0((list_length(final_rtable) + 1) * sizeof(RangeTblEntry *));
rti = 1;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index e669d75a5af..092e5f3879e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -15019,7 +15019,7 @@ ColLabel: IDENT { $$ = $1; }
*
* Make sure that each keyword's category in kwlist.h matches where
* it is listed here. (Someday we may be able to generate these lists and
- * kwlist.h's table from a common master list.)
+ * kwlist.h's table from one source of truth.)
*/
/* "Unreserved" keywords --- available for use as any kind of name.
diff --git a/src/backend/snowball/README b/src/backend/snowball/README
index 92ee16901fe..6948c28b69f 100644
--- a/src/backend/snowball/README
+++ b/src/backend/snowball/README
@@ -22,8 +22,8 @@ At least on Linux, no platform-specific adjustment is needed.
Postgres' files under src/backend/snowball/libstemmer/ and
src/include/snowball/libstemmer/ are taken directly from the Snowball
files, with only some minor adjustments of file inclusions. Note
-that most of these files are in fact derived files, not master source.
-The master sources are in the Snowball language, and are built using
+that most of these files are in fact derived files, not original source.
+The original sources are in the Snowball language, and are built using
the Snowball-to-C compiler that is also part of the Snowball project.
We choose to include the derived files in the PostgreSQL distribution
because most installations will not have the Snowball compiler available.
diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c
index 1c063c592ce..6b6c8571e23 100644
--- a/src/backend/utils/time/snapmgr.c
+++ b/src/backend/utils/time/snapmgr.c
@@ -2222,9 +2222,9 @@ RestoreSnapshot(char *start_address)
* the declaration for PGPROC.
*/
void
-RestoreTransactionSnapshot(Snapshot snapshot, void *master_pgproc)
+RestoreTransactionSnapshot(Snapshot snapshot, void *source_pgproc)
{
- SetTransactionSnapshot(snapshot, NULL, InvalidPid, master_pgproc);
+ SetTransactionSnapshot(snapshot, NULL, InvalidPid, source_pgproc);
}
/*
diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c
index 24d4b57f1a5..f4eabc8f39c 100644
--- a/src/pl/tcl/pltcl.c
+++ b/src/pl/tcl/pltcl.c
@@ -432,9 +432,9 @@ _PG_init(void)
* stdout and stderr on DeleteInterp
************************************************************/
if ((pltcl_hold_interp = Tcl_CreateInterp()) == NULL)
- elog(ERROR, "could not create master Tcl interpreter");
+ elog(ERROR, "could not create dummy Tcl interpreter");
if (Tcl_Init(pltcl_hold_interp) == TCL_ERROR)
- elog(ERROR, "could not initialize master Tcl interpreter");
+ elog(ERROR, "could not initialize dummy Tcl interpreter");
/************************************************************
* Create the hash table for working interpreters
@@ -489,14 +489,14 @@ pltcl_init_interp(pltcl_interp_desc *interp_desc, Oid prolang, bool pltrusted)
char interpname[32];
/************************************************************
- * Create the Tcl interpreter as a slave of pltcl_hold_interp.
+ * Create the Tcl interpreter subsidiary to pltcl_hold_interp.
* Note: Tcl automatically does Tcl_Init in the untrusted case,
* and it's not wanted in the trusted case.
************************************************************/
- snprintf(interpname, sizeof(interpname), "slave_%u", interp_desc->user_id);
+ snprintf(interpname, sizeof(interpname), "subsidiary_%u", interp_desc->user_id);
if ((interp = Tcl_CreateSlave(pltcl_hold_interp, interpname,
pltrusted ? 1 : 0)) == NULL)
- elog(ERROR, "could not create slave Tcl interpreter");
+ elog(ERROR, "could not create subsidiary Tcl interpreter");
/************************************************************
* Initialize the query hash table associated with interpreter
--
2.25.0.114.g5b0ca878e0
v1-0005-docs-s-master-primary.patchtext/x-diff; charset=us-asciiDownload
From f75597474798361053290141731ab5557035098e Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 15 Jun 2020 10:12:58 -0700
Subject: [PATCH v1 5/8] docs: s/master/primary/
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
doc/src/sgml/amcheck.sgml | 2 +-
doc/src/sgml/backup.sgml | 16 +++----
doc/src/sgml/config.sgml | 42 ++++++++---------
doc/src/sgml/external-projects.sgml | 2 +-
doc/src/sgml/high-availability.sgml | 67 +++++++++++++--------------
doc/src/sgml/libpq.sgml | 2 +-
doc/src/sgml/logical-replication.sgml | 4 +-
doc/src/sgml/monitoring.sgml | 6 +--
doc/src/sgml/mvcc.sgml | 6 +--
doc/src/sgml/pgstandby.sgml | 2 +-
doc/src/sgml/protocol.sgml | 2 +-
doc/src/sgml/ref/pg_basebackup.sgml | 10 ++--
doc/src/sgml/ref/pg_rewind.sgml | 4 +-
doc/src/sgml/runtime.sgml | 4 +-
doc/src/sgml/wal.sgml | 4 +-
15 files changed, 86 insertions(+), 87 deletions(-)
diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml
index 75518a7820a..a9df2c1a9d2 100644
--- a/doc/src/sgml/amcheck.sgml
+++ b/doc/src/sgml/amcheck.sgml
@@ -253,7 +253,7 @@ SET client_min_messages = DEBUG1;
implies that operating system collation rules must never change.
Though rare, updates to operating system collation rules can
cause these issues. More commonly, an inconsistency in the
- collation order between a master server and a standby server is
+ collation order between a primary server and a standby server is
implicated, possibly because the <emphasis>major</emphasis> operating
system version in use is inconsistent. Such inconsistencies will
generally only arise on standby servers, and so can generally
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index bdc9026c629..b9331830f7d 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -964,7 +964,7 @@ SELECT * FROM pg_stop_backup(false, true);
non-exclusive one, but it differs in a few key steps. This type of
backup can only be taken on a primary and does not allow concurrent
backups. Moreover, because it creates a backup label file, as
- described below, it can block automatic restart of the master server
+ described below, it can block automatic restart of the primary server
after a crash. On the other hand, the erroneous removal of this
file from a backup or standby is a common mistake, which can result
in serious data corruption. If it is necessary to use this method,
@@ -1033,9 +1033,9 @@ SELECT pg_start_backup('label', true);
this will result in corruption. Confusion about when it is appropriate
to remove this file is a common cause of data corruption when using this
method; be very certain that you remove the file only on an existing
- master and never when building a standby or restoring a backup, even if
+ primary and never when building a standby or restoring a backup, even if
you are building a standby that will subsequently be promoted to a new
- master.
+ primary.
</para>
</listitem>
<listitem>
@@ -1128,16 +1128,16 @@ SELECT pg_stop_backup();
<para>
It is often a good idea to also omit from the backup the files
within the cluster's <filename>pg_replslot/</filename> directory, so that
- replication slots that exist on the master do not become part of the
+ replication slots that exist on the primary do not become part of the
backup. Otherwise, the subsequent use of the backup to create a standby
may result in indefinite retention of WAL files on the standby, and
- possibly bloat on the master if hot standby feedback is enabled, because
+ possibly bloat on the primary if hot standby feedback is enabled, because
the clients that are using those replication slots will still be connecting
- to and updating the slots on the master, not the standby. Even if the
- backup is only intended for use in creating a new master, copying the
+ to and updating the slots on the primary, not the standby. Even if the
+ backup is only intended for use in creating a new primary, copying the
replication slots isn't expected to be particularly useful, since the
contents of those slots will likely be badly out of date by the time
- the new master comes on line.
+ the new primary comes on line.
</para>
<para>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 783bf7a12ba..0c889bdb49b 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -697,7 +697,7 @@ include_dir 'conf.d'
<para>
When running a standby server, you must set this parameter to the
- same or higher value than on the master server. Otherwise, queries
+ same or higher value than on the primary server. Otherwise, queries
will not be allowed in the standby server.
</para>
</listitem>
@@ -1643,7 +1643,7 @@ include_dir 'conf.d'
<para>
When running a standby server, you must set this parameter to the
- same or higher value than on the master server. Otherwise, queries
+ same or higher value than on the primary server. Otherwise, queries
will not be allowed in the standby server.
</para>
</listitem>
@@ -2259,7 +2259,7 @@ include_dir 'conf.d'
<para>
When running a standby server, you must set this parameter to the
- same or higher value than on the master server. Otherwise, queries
+ same or higher value than on the primary server. Otherwise, queries
will not be allowed in the standby server.
</para>
@@ -3253,7 +3253,7 @@ include_dir 'conf.d'
<varname>archive_timeout</varname> — it will bloat your archive
storage. <varname>archive_timeout</varname> settings of a minute or so are
usually reasonable. You should consider using streaming replication,
- instead of archiving, if you want data to be copied off the master
+ instead of archiving, if you want data to be copied off the primary
server more quickly than that.
If this value is specified without units, it is taken as seconds.
This parameter can only be set in the
@@ -3678,12 +3678,12 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
These settings control the behavior of the built-in
<firstterm>streaming replication</firstterm> feature (see
<xref linkend="streaming-replication"/>). Servers will be either a
- master or a standby server. Masters can send data, while standbys
+ primary or a standby server. Primaries can send data, while standbys
are always receivers of replicated data. When cascading replication
(see <xref linkend="cascading-replication"/>) is used, standby servers
can also be senders, as well as receivers.
Parameters are mainly for sending and standby servers, though some
- parameters have meaning only on the master server. Settings may vary
+ parameters have meaning only on the primary server. Settings may vary
across the cluster without problems if that is required.
</para>
@@ -3693,10 +3693,10 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
<para>
These parameters can be set on any server that is
to send replication data to one or more standby servers.
- The master is always a sending server, so these parameters must
- always be set on the master.
+ The primary is always a sending server, so these parameters must
+ always be set on the primary.
The role and meaning of these parameters does not change after a
- standby becomes the master.
+ standby becomes the primary.
</para>
<variablelist>
@@ -3724,7 +3724,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
<para>
When running a standby server, you must set this parameter to the
- same or higher value than on the master server. Otherwise, queries
+ same or higher value than on the primary server. Otherwise, queries
will not be allowed in the standby server.
</para>
</listitem>
@@ -3855,19 +3855,19 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
</variablelist>
</sect2>
- <sect2 id="runtime-config-replication-master">
- <title>Master Server</title>
+ <sect2 id="runtime-config-replication-primary">
+ <title>Primary Server</title>
<para>
- These parameters can be set on the master/primary server that is
+ These parameters can be set on the primary server that is
to send replication data to one or more standby servers.
Note that in addition to these parameters,
- <xref linkend="guc-wal-level"/> must be set appropriately on the master
+ <xref linkend="guc-wal-level"/> must be set appropriately on the primary
server, and optionally WAL archiving can be enabled as
well (see <xref linkend="runtime-config-wal-archiving"/>).
The values of these parameters on standby servers are irrelevant,
although you may wish to set them there in preparation for the
- possibility of a standby becoming the master.
+ possibility of a standby becoming the primary.
</para>
<variablelist>
@@ -4042,7 +4042,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
<para>
These settings control the behavior of a standby server that is
- to receive replication data. Their values on the master server
+ to receive replication data. Their values on the primary server
are irrelevant.
</para>
@@ -4369,7 +4369,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
of time. For example, if
you set this parameter to <literal>5min</literal>, the standby will
replay each transaction commit only when the system time on the standby
- is at least five minutes past the commit time reported by the master.
+ is at least five minutes past the commit time reported by the primary.
If this value is specified without units, it is taken as milliseconds.
The default is zero, adding no delay.
</para>
@@ -4377,10 +4377,10 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
It is possible that the replication delay between servers exceeds the
value of this parameter, in which case no delay is added.
Note that the delay is calculated between the WAL time stamp as written
- on master and the current time on the standby. Delays in transfer
+ on primary and the current time on the standby. Delays in transfer
because of network lag or cascading replication configurations
may reduce the actual wait time significantly. If the system
- clocks on master and standby are not synchronized, this may lead to
+ clocks on primary and standby are not synchronized, this may lead to
recovery applying records earlier than expected; but that is not a
major issue because useful settings of this parameter are much larger
than typical time deviations between servers.
@@ -4402,7 +4402,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
except crash recovery.
<varname>hot_standby_feedback</varname> will be delayed by use of this feature
- which could lead to bloat on the master; use both together with care.
+ which could lead to bloat on the primary; use both together with care.
<warning>
<para>
@@ -8998,7 +8998,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
<para>
When running a standby server, you must set this parameter to the
- same or higher value than on the master server. Otherwise, queries
+ same or higher value than on the primary server. Otherwise, queries
will not be allowed in the standby server.
</para>
</listitem>
diff --git a/doc/src/sgml/external-projects.sgml b/doc/src/sgml/external-projects.sgml
index f94e450ef9e..108bbc65d4c 100644
--- a/doc/src/sgml/external-projects.sgml
+++ b/doc/src/sgml/external-projects.sgml
@@ -244,7 +244,7 @@
<productname>PostgreSQL</productname> replication solutions can be developed
externally. For example, <application> <ulink
url="http://www.slony.info">Slony-I</ulink></application> is a popular
- master/standby replication solution that is developed independently
+ primary/standby replication solution that is developed independently
from the core project.
</para>
</sect1>
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 65c3fc62a97..6a9184f314e 100644
--- a/doc/src/sgml/high-availability.sgml
+++ b/doc/src/sgml/high-availability.sgml
@@ -120,7 +120,7 @@
system residing on another computer. The only restriction is that
the mirroring must be done in a way that ensures the standby server
has a consistent copy of the file system — specifically, writes
- to the standby must be done in the same order as those on the master.
+ to the standby must be done in the same order as those on the primary.
<productname>DRBD</productname> is a popular file system replication solution
for Linux.
</para>
@@ -146,7 +146,7 @@ protocol to make nodes agree on a serializable transactional order.
stream of write-ahead log (<acronym>WAL</acronym>)
records. If the main server fails, the standby contains
almost all of the data of the main server, and can be quickly
- made the new master database server. This can be synchronous or
+ made the new primary database server. This can be synchronous or
asynchronous and can only be done for the entire database server.
</para>
<para>
@@ -167,7 +167,7 @@ protocol to make nodes agree on a serializable transactional order.
logical replication constructs a stream of logical data modifications
from the WAL. Logical replication allows the data changes from
individual tables to be replicated. Logical replication doesn't require
- a particular server to be designated as a master or a replica but allows
+ a particular server to be designated as a primary or a replica but allows
data to flow in multiple directions. For more information on logical
replication, see <xref linkend="logical-replication"/>. Through the
logical decoding interface (<xref linkend="logicaldecoding"/>),
@@ -219,9 +219,9 @@ protocol to make nodes agree on a serializable transactional order.
this is unacceptable, either the middleware or the application
must query such values from a single server and then use those
values in write queries. Another option is to use this replication
- option with a traditional master-standby setup, i.e. data modification
- queries are sent only to the master and are propagated to the
- standby servers via master-standby replication, not by the replication
+ option with a traditional primary-standby setup, i.e. data modification
+ queries are sent only to the primary and are propagated to the
+ standby servers via primary-standby replication, not by the replication
middleware. Care must also be taken that all
transactions either commit or abort on all servers, perhaps
using two-phase commit (<xref linkend="sql-prepare-transaction"/>
@@ -263,7 +263,7 @@ protocol to make nodes agree on a serializable transactional order.
to reduce the communication overhead. Synchronous multimaster
replication is best for mostly read workloads, though its big
advantage is that any server can accept write requests —
- there is no need to partition workloads between master and
+ there is no need to partition workloads between primary and
standby servers, and because the data changes are sent from one
server to another, there is no problem with non-deterministic
functions like <function>random()</function>.
@@ -363,7 +363,7 @@ protocol to make nodes agree on a serializable transactional order.
</row>
<row>
- <entry>No master server overhead</entry>
+ <entry>No overhead on primary</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
@@ -387,7 +387,7 @@ protocol to make nodes agree on a serializable transactional order.
</row>
<row>
- <entry>Master failure will never lose data</entry>
+ <entry>Primary failure will never lose data</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">with sync on</entry>
@@ -454,7 +454,7 @@ protocol to make nodes agree on a serializable transactional order.
partitioned by offices, e.g., London and Paris, with a server
in each office. If queries combining London and Paris data
are necessary, an application can query both servers, or
- master/standby replication can be used to keep a read-only copy
+ primary/standby replication can be used to keep a read-only copy
of the other office's data on each server.
</para>
</listitem>
@@ -621,13 +621,13 @@ protocol to make nodes agree on a serializable transactional order.
<para>
In standby mode, the server continuously applies WAL received from the
- master server. The standby server can read WAL from a WAL archive
- (see <xref linkend="guc-restore-command"/>) or directly from the master
+ primary server. The standby server can read WAL from a WAL archive
+ (see <xref linkend="guc-restore-command"/>) or directly from the primary
over a TCP connection (streaming replication). The standby server will
also attempt to restore any WAL found in the standby cluster's
<filename>pg_wal</filename> directory. That typically happens after a server
restart, when the standby replays again WAL that was streamed from the
- master before the restart, but you can also manually copy files to
+ primary before the restart, but you can also manually copy files to
<filename>pg_wal</filename> at any time to have them replayed.
</para>
@@ -652,20 +652,20 @@ protocol to make nodes agree on a serializable transactional order.
<function>pg_promote()</function> is called, or a trigger file is found
(<varname>promote_trigger_file</varname>). Before failover,
any WAL immediately available in the archive or in <filename>pg_wal</filename> will be
- restored, but no attempt is made to connect to the master.
+ restored, but no attempt is made to connect to the primary.
</para>
</sect2>
- <sect2 id="preparing-master-for-standby">
- <title>Preparing the Master for Standby Servers</title>
+ <sect2 id="preparing-primary-for-standby">
+ <title>Preparing the Primary for Standby Servers</title>
<para>
Set up continuous archiving on the primary to an archive directory
accessible from the standby, as described
in <xref linkend="continuous-archiving"/>. The archive location should be
- accessible from the standby even when the master is down, i.e. it should
+ accessible from the standby even when the primary is down, i.e. it should
reside on the standby server itself or another trusted server, not on
- the master server.
+ the primary server.
</para>
<para>
@@ -898,7 +898,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
<link linkend="monitoring-pg-stat-replication-view"><structname>
pg_stat_replication</structname></link> view. Large differences between
<function>pg_current_wal_lsn</function> and the view's <literal>sent_lsn</literal> field
- might indicate that the master server is under heavy load, while
+ might indicate that the primary server is under heavy load, while
differences between <literal>sent_lsn</literal> and
<function>pg_last_wal_receive_lsn</function> on the standby might indicate
network delay, or that the standby is under heavy load.
@@ -921,9 +921,9 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
<secondary>streaming replication</secondary>
</indexterm>
<para>
- Replication slots provide an automated way to ensure that the master does
+ Replication slots provide an automated way to ensure that the primary does
not remove WAL segments until they have been received by all standbys,
- and that the master does not remove rows which could cause a
+ and that the primary does not remove rows which could cause a
<link linkend="hot-standby-conflict">recovery conflict</link> even when the
standby is disconnected.
</para>
@@ -1001,23 +1001,22 @@ primary_slot_name = 'node_a_slot'
<para>
The cascading replication feature allows a standby server to accept replication
connections and stream WAL records to other standbys, acting as a relay.
- This can be used to reduce the number of direct connections to the master
+ This can be used to reduce the number of direct connections to the primary
and also to minimize inter-site bandwidth overheads.
</para>
<para>
A standby acting as both a receiver and a sender is known as a cascading
- standby. Standbys that are more directly connected to the master are known
+ standby. Standbys that are more directly connected to the primary are known
as upstream servers, while those standby servers further away are downstream
servers. Cascading replication does not place limits on the number or
arrangement of downstream servers, though each standby connects to only
- one upstream server which eventually links to a single master/primary
- server.
+ one upstream server which eventually links to a single primary server.
</para>
<para>
A cascading standby sends not only WAL records received from the
- master but also those restored from the archive. So even if the replication
+ primary but also those restored from the archive. So even if the replication
connection in some upstream connection is terminated, streaming replication
continues downstream for as long as new WAL records are available.
</para>
@@ -1033,8 +1032,8 @@ primary_slot_name = 'node_a_slot'
</para>
<para>
- If an upstream standby server is promoted to become new master, downstream
- servers will continue to stream from the new master if
+ If an upstream standby server is promoted to become the new primary, downstream
+ servers will continue to stream from the new primary if
<varname>recovery_target_timeline</varname> is set to <literal>'latest'</literal> (the default).
</para>
@@ -1120,7 +1119,7 @@ primary_slot_name = 'node_a_slot'
a non-empty value. <varname>synchronous_commit</varname> must also be set to
<literal>on</literal>, but since this is the default value, typically no change is
required. (See <xref linkend="runtime-config-wal-settings"/> and
- <xref linkend="runtime-config-replication-master"/>.)
+ <xref linkend="runtime-config-replication-primary"/>.)
This configuration will cause each commit to wait for
confirmation that the standby has written the commit record to durable
storage.
@@ -1145,8 +1144,8 @@ primary_slot_name = 'node_a_slot'
confirmation that the commit record has been received. These parameters
allow the administrator to specify which standby servers should be
synchronous standbys. Note that the configuration of synchronous
- replication is mainly on the master. Named standbys must be directly
- connected to the master; the master knows nothing about downstream
+ replication is mainly on the primary. Named standbys must be directly
+ connected to the primary; the primary knows nothing about downstream
standby servers using cascaded replication.
</para>
@@ -1504,7 +1503,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)'
<para>
Note that in this mode, the server will apply WAL one file at a
time, so if you use the standby server for queries (see Hot Standby),
- there is a delay between an action in the master and when the
+ there is a delay between an action in the primary and when the
action becomes visible in the standby, corresponding the time it takes
to fill up the WAL file. <varname>archive_timeout</varname> can be used to make that delay
shorter. Also note that you can't combine streaming replication with
@@ -2049,7 +2048,7 @@ if (!triggered)
cleanup of old row versions when there are no transactions that need to
see them to ensure correct visibility of data according to MVCC rules.
However, this rule can only be applied for transactions executing on the
- master. So it is possible that cleanup on the master will remove row
+ primary. So it is possible that cleanup on the primary will remove row
versions that are still visible to a transaction on the standby.
</para>
@@ -2438,7 +2437,7 @@ LOG: database system is ready to accept read only connections
<listitem>
<para>
Valid starting points for standby queries are generated at each
- checkpoint on the master. If the standby is shut down while the master
+ checkpoint on the primary. If the standby is shut down while the primary
is in a shutdown state, it might not be possible to re-enter Hot Standby
until the primary is started up, so that it generates further starting
points in the WAL logs. This situation isn't a problem in the most
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index dfc292872a9..ab44d37a974 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -7362,7 +7362,7 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
the <literal>host</literal> parameter
matches <application>libpq</application>'s default socket directory path.
In a standby server, a database field of <literal>replication</literal>
- matches streaming replication connections made to the master server.
+ matches streaming replication connections made to the primary server.
The database field is of limited usefulness otherwise, because users have
the same password for all databases in the same cluster.
</para>
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e19bb3fd650..7c8629d74ef 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -99,7 +99,7 @@
<para>
A <firstterm>publication</firstterm> can be defined on any physical
- replication master. The node where a publication is defined is referred to
+ replication primary. The node where a publication is defined is referred to
as <firstterm>publisher</firstterm>. A publication is a set of changes
generated from a table or a group of tables, and might also be described as
a change set or replication set. Each publication exists in only one database.
@@ -489,7 +489,7 @@
Because logical replication is based on a similar architecture as
<link linkend="streaming-replication">physical streaming replication</link>,
the monitoring on a publication node is similar to monitoring of a
- physical replication master
+ physical replication primary
(see <xref linkend="streaming-replication-monitoring"/>).
</para>
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 89662cc0a36..04cca7a37b5 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -62,10 +62,10 @@ postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl
(The appropriate invocation of <command>ps</command> varies across different
platforms, as do the details of what is shown. This example is from a
recent Linux system.) The first process listed here is the
- master server process. The command arguments
+ primary server process. The command arguments
shown for it are the same ones used when it was launched. The next five
processes are background worker processes automatically launched by the
- master process. (The <quote>stats collector</quote> process will not be present
+ primary process. (The <quote>stats collector</quote> process will not be present
if you have set the system not to start the statistics collector; likewise
the <quote>autovacuum launcher</quote> process can be disabled.)
Each of the remaining
@@ -3541,7 +3541,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
one row per database, showing database-wide statistics about
query cancels occurring due to conflicts with recovery on standby servers.
This view will only contain information on standby servers, since
- conflicts do not occur on master servers.
+ conflicts do not occur on primary servers.
</para>
<table id="pg-stat-database-conflicts-view" xreflabel="pg_stat_database_conflicts">
diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml
index a826f2b4e47..6b1a8057610 100644
--- a/doc/src/sgml/mvcc.sgml
+++ b/doc/src/sgml/mvcc.sgml
@@ -1619,7 +1619,7 @@ SELECT pg_advisory_lock(q.id) FROM
This level of integrity protection using Serializable transactions
does not yet extend to hot standby mode (<xref linkend="hot-standby"/>).
Because of that, those using hot standby may want to use Repeatable
- Read and explicit locking on the master.
+ Read and explicit locking on the primary.
</para>
</warning>
</sect2>
@@ -1721,10 +1721,10 @@ SELECT pg_advisory_lock(q.id) FROM
<xref linkend="hot-standby"/>). The strictest isolation level currently
supported in hot standby mode is Repeatable Read. While performing all
permanent database writes within Serializable transactions on the
- master will ensure that all standbys will eventually reach a consistent
+ primary will ensure that all standbys will eventually reach a consistent
state, a Repeatable Read transaction run on the standby can sometimes
see a transient state that is inconsistent with any serial execution
- of the transactions on the master.
+ of the transactions on the primary.
</para>
</sect1>
diff --git a/doc/src/sgml/pgstandby.sgml b/doc/src/sgml/pgstandby.sgml
index d8aded43840..66a62559303 100644
--- a/doc/src/sgml/pgstandby.sgml
+++ b/doc/src/sgml/pgstandby.sgml
@@ -73,7 +73,7 @@ restore_command = 'pg_standby <replaceable>archiveDir</replaceable> %f %p %r'
</para>
<para>
There are two ways to fail over to a <quote>warm standby</quote> database server
- when the master server fails:
+ when the primary server fails:
<variablelist>
<varlistentry>
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 20d1fe0ad81..8b00235a516 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1793,7 +1793,7 @@ The commands accepted in replication mode are:
<listitem>
<para>
Current timeline ID. Also useful to check that the standby is
- consistent with the master.
+ consistent with the primary.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index db480be674a..e2a01be895d 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -65,11 +65,11 @@ PostgreSQL documentation
<para>
<application>pg_basebackup</application> can make a base backup from
- not only the master but also the standby. To take a backup from the standby,
+ not only the primary but also the standby. To take a backup from the standby,
set up the standby so that it can accept replication connections (that is, set
<varname>max_wal_senders</varname> and <xref linkend="guc-hot-standby"/>,
and configure <link linkend="auth-pg-hba-conf">host-based authentication</link>).
- You will also need to enable <xref linkend="guc-full-page-writes"/> on the master.
+ You will also need to enable <xref linkend="guc-full-page-writes"/> on the primary.
</para>
<para>
@@ -89,13 +89,13 @@ PostgreSQL documentation
</listitem>
<listitem>
<para>
- If the standby is promoted to the master during online backup, the backup fails.
+ If the standby is promoted to the primary during online backup, the backup fails.
</para>
</listitem>
<listitem>
<para>
All WAL records required for the backup must contain sufficient full-page writes,
- which requires you to enable <varname>full_page_writes</varname> on the master and
+ which requires you to enable <varname>full_page_writes</varname> on the primary and
not to use a tool like <application>pg_compresslog</application> as
<varname>archive_command</varname> to remove full-page writes from WAL files.
</para>
@@ -328,7 +328,7 @@ PostgreSQL documentation
it will use up two connections configured by the
<xref linkend="guc-max-wal-senders"/> parameter. As long as the
client can keep up with write-ahead log received, using this mode
- requires no extra write-ahead logs to be saved on the master.
+ requires no extra write-ahead logs to be saved on the primary.
</para>
<para>
When tar format mode is used, the write-ahead log files will be
diff --git a/doc/src/sgml/ref/pg_rewind.sgml b/doc/src/sgml/ref/pg_rewind.sgml
index 9ae1bf3ab6e..440eed7d4b7 100644
--- a/doc/src/sgml/ref/pg_rewind.sgml
+++ b/doc/src/sgml/ref/pg_rewind.sgml
@@ -43,8 +43,8 @@ PostgreSQL documentation
<para>
<application>pg_rewind</application> is a tool for synchronizing a PostgreSQL cluster
with another copy of the same cluster, after the clusters' timelines have
- diverged. A typical scenario is to bring an old master server back online
- after failover as a standby that follows the new master.
+ diverged. A typical scenario is to bring an old primary server back online
+ after failover as a standby that follows the new primary.
</para>
<para>
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 88210c4a5d3..1fd4ab723c2 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1864,9 +1864,9 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
This is possible because logical replication supports
replication between different major versions of
<productname>PostgreSQL</productname>. The standby can be on the same computer or
- a different computer. Once it has synced up with the master server
+ a different computer. Once it has synced up with the primary server
(running the older version of <productname>PostgreSQL</productname>), you can
- switch masters and make the standby the master and shut down the older
+ switch primaries and make the standby the primary and shut down the older
database instance. Such a switch-over results in only several seconds
of downtime for an upgrade.
</para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index bd9fae544c1..1902f36291d 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -596,8 +596,8 @@
indicate that the already-processed WAL data need not be scanned again,
and then recycles any old log segment files in the <filename>pg_wal</filename>
directory.
- Restartpoints can't be performed more frequently than checkpoints in the
- master because restartpoints can only be performed at checkpoint records.
+ Restartpoints can't be performed more frequently than checkpoints on the
+ primary because restartpoints can only be performed at checkpoint records.
A restartpoint is triggered when a checkpoint record is reached if at
least <varname>checkpoint_timeout</varname> seconds have passed since the last
restartpoint, or if WAL size is about to exceed
--
2.25.0.114.g5b0ca878e0
v1-0006-docs-s-master-root.patchtext/x-diff; charset=us-asciiDownload
From c8d0dc1b2d9f4e2cb8e2e3e44cff825e9bdd73c0 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 15 Jun 2020 10:18:41 -0700
Subject: [PATCH v1 6/8] docs: s/master/root/
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
doc/src/sgml/ddl.sgml | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 991323d3471..f45c951b2b0 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -4142,12 +4142,12 @@ ALTER INDEX measurement_city_id_logdate_key
<orderedlist spacing="compact">
<listitem>
<para>
- Create the <quote>master</quote> table, from which all of the
+ Create the <quote>root</quote> table, from which all of the
<quote>child</quote> tables will inherit. This table will contain no data. Do not
define any check constraints on this table, unless you intend them
to be applied equally to all child tables. There is no point in
defining any indexes or unique constraints on it, either. For our
- example, the master table is the <structname>measurement</structname>
+ example, the root table is the <structname>measurement</structname>
table as originally defined.
</para>
</listitem>
@@ -4155,8 +4155,8 @@ ALTER INDEX measurement_city_id_logdate_key
<listitem>
<para>
Create several <quote>child</quote> tables that each inherit from
- the master table. Normally, these tables will not add any columns
- to the set inherited from the master. Just as with declarative
+ the root table. Normally, these tables will not add any columns
+ to the set inherited from the root. Just as with declarative
partitioning, these tables are in every way normal
<productname>PostgreSQL</productname> tables (or foreign tables).
</para>
@@ -4244,7 +4244,7 @@ CREATE INDEX measurement_y2008m01_logdate ON measurement_y2008m01 (logdate);
We want our application to be able to say <literal>INSERT INTO
measurement ...</literal> and have the data be redirected into the
appropriate child table. We can arrange that by attaching
- a suitable trigger function to the master table.
+ a suitable trigger function to the root table.
If data will be added only to the latest child, we can
use a very simple trigger function:
@@ -4326,7 +4326,7 @@ LANGUAGE plpgsql;
<para>
A different approach to redirecting inserts into the appropriate
child table is to set up rules, instead of a trigger, on the
- master table. For example:
+ root table. For example:
<programlisting>
CREATE RULE measurement_insert_y2006m02 AS
@@ -4351,7 +4351,7 @@ DO INSTEAD
<para>
Be aware that <command>COPY</command> ignores rules. If you want to
use <command>COPY</command> to insert data, you'll need to copy into the
- correct child table rather than directly into the master. <command>COPY</command>
+ correct child table rather than directly into the root. <command>COPY</command>
does fire triggers, so you can use it normally if you use the trigger
approach.
</para>
@@ -4359,7 +4359,7 @@ DO INSTEAD
<para>
Another disadvantage of the rule approach is that there is no simple
way to force an error if the set of rules doesn't cover the insertion
- date; the data will silently go into the master table instead.
+ date; the data will silently go into the root table instead.
</para>
</listitem>
@@ -4473,7 +4473,7 @@ ALTER TABLE measurement_y2008m02 INHERIT measurement;
<programlisting>
ANALYZE measurement;
</programlisting>
- will only process the master table.
+ will only process the root table.
</para>
</listitem>
--
2.25.0.114.g5b0ca878e0
v1-0007-docs-s-master-supervisor.patchtext/x-diff; charset=us-asciiDownload
From 4672f0a40f6232666f3dcf4a0faa4e47c7222294 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 15 Jun 2020 10:19:32 -0700
Subject: [PATCH v1 7/8] docs: s/master/supervisor/
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
doc/src/sgml/arch-dev.sgml | 5 +++--
doc/src/sgml/runtime.sgml | 6 +++---
doc/src/sgml/start.sgml | 2 +-
3 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml
index 9ffb8427bf0..7883c3cd827 100644
--- a/doc/src/sgml/arch-dev.sgml
+++ b/doc/src/sgml/arch-dev.sgml
@@ -122,8 +122,9 @@
there is one <firstterm>client process</firstterm> connected to
exactly one <firstterm>server process</firstterm>. As we do not
know ahead of time how many connections will be made, we have to
- use a <firstterm>master process</firstterm> that spawns a new
- server process every time a connection is requested. This master
+ use a <firstterm>supervisor process</firstterm> (also
+ <firstterm>master process</firstterm>) that spawns a new
+ server process every time a connection is requested. This supervisor
process is called <literal>postgres</literal> and listens at a
specified TCP/IP port for incoming connections. Whenever a request
for a connection is detected the <literal>postgres</literal>
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 1fd4ab723c2..331d01b4445 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1292,7 +1292,7 @@ default:\
optimal for <productname>PostgreSQL</productname>. Because of the
way that the kernel implements memory overcommit, the kernel might
terminate the <productname>PostgreSQL</productname> postmaster (the
- master server process) if the memory demands of either
+ supervisor server process) if the memory demands of either
<productname>PostgreSQL</productname> or another process cause the
system to run out of virtual memory.
</para>
@@ -1465,7 +1465,7 @@ $ <userinput>grep Huge /proc/meminfo</userinput>
<para>
There are several ways to shut down the database server. You control
- the type of shutdown by sending different signals to the master
+ the type of shutdown by sending different signals to the supervisor
<command>postgres</command> process.
<variablelist>
@@ -1511,7 +1511,7 @@ $ <userinput>grep Huge /proc/meminfo</userinput>
The server will send <systemitem>SIGQUIT</systemitem> to all child
processes and wait for them to terminate. If any do not terminate
within 5 seconds, they will be sent <systemitem>SIGKILL</systemitem>.
- The master server process exits as soon as all child processes have
+ The supervisor server process exits as soon as all child processes have
exited, without doing normal database shutdown processing.
This will lead to recovery (by
replaying the WAL log) upon next start-up. This is recommended
diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml
index 2a47f69079b..9bb5c1a6d5d 100644
--- a/doc/src/sgml/start.sgml
+++ b/doc/src/sgml/start.sgml
@@ -113,7 +113,7 @@
From that point on, the client and the new server process
communicate without intervention by the original
<filename>postgres</filename> process. Thus, the
- master server process is always running, waiting for
+ supervisor server process is always running, waiting for
client connections, whereas client and associated server processes
come and go. (All of this is of course invisible to the user. We
only mention it here for completeness.)
--
2.25.0.114.g5b0ca878e0
v1-0008-docs-WIP-multi-master-rephrasing.patchtext/x-diff; charset=us-asciiDownload
From ec9bc191e2bbe41c43a9483bf20f3eed242045c5 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 15 Jun 2020 10:39:15 -0700
Subject: [PATCH v1 8/8] docs: WIP multi-master rephrasing.
Perhaps active-active would be a better term?
Author:
Reviewed-By:
Discussion: https://postgr.es/m/
Backpatch:
---
doc/src/sgml/high-availability.sgml | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 6a9184f314e..198fd5b66ed 100644
--- a/doc/src/sgml/high-availability.sgml
+++ b/doc/src/sgml/high-availability.sgml
@@ -38,10 +38,10 @@
<para>
Some solutions deal with synchronization by allowing only one
server to modify the data. Servers that can modify data are
- called read/write, <firstterm>master</firstterm> or <firstterm>primary</firstterm> servers.
- Servers that track changes in the master are called <firstterm>standby</firstterm>
+ called read/write, <firstterm>primary</firstterm> (or <firstterm>master</firstterm>) servers.
+ Servers that track changes in the primary are called <firstterm>standby</firstterm>
or <firstterm>secondary</firstterm> servers. A standby server that cannot be connected
- to until it is promoted to a master server is called a <firstterm>warm
+ to until it is promoted to a primary server is called a <firstterm>warm
standby</firstterm> server, and one that can accept connections and serves read-only
queries is called a <firstterm>hot standby</firstterm> server.
</para>
@@ -177,14 +177,14 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Trigger-Based Master-Standby Replication</term>
+ <term>Trigger-Based Primary-Standby Replication</term>
<listitem>
<para>
- A master-standby replication setup sends all data modification
- queries to the master server. The master server asynchronously
+ A primary-standby replication setup sends all data modification
+ queries to the primary server. The primary server asynchronously
sends data changes to the standby server. The standby can answer
- read-only queries while the master server is running. The
+ read-only queries while the primary server is running. The
standby server is ideal for data warehouse queries.
</para>
@@ -233,14 +233,14 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Asynchronous Multimaster Replication</term>
+ <term>Asynchronous Multi-Source Replication</term>
<listitem>
<para>
For servers that are not regularly connected or have slow
communication links, like laptops or
remote servers, keeping data consistent among servers is a
- challenge. Using asynchronous multimaster replication, each
+ challenge. Using asynchronous multi-source replication, each
server works independently, and periodically communicates with
the other servers to identify conflicting transactions. The
conflicts can be resolved by users or conflict resolution rules.
@@ -250,17 +250,17 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Synchronous Multimaster Replication</term>
+ <term>Synchronous Multi-Source Replication</term>
<listitem>
<para>
- In synchronous multimaster replication, each server can accept
+ In synchronous multi-source replication, each server can accept
write requests, and modified data is transmitted from the
original server to every other server before each transaction
commits. Heavy write activity can cause excessive locking and
commit delays, leading to poor performance. Read requests can
be sent to any server. Some implementations use shared disk
- to reduce the communication overhead. Synchronous multimaster
+ to reduce the communication overhead. Synchronous multi-source
replication is best for mostly read workloads, though its big
advantage is that any server can accept write requests —
there is no need to partition workloads between primary and
@@ -351,7 +351,7 @@ protocol to make nodes agree on a serializable transactional order.
</row>
<row>
- <entry>Allows multiple master servers</entry>
+ <entry>Allows multiple primary servers</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
--
2.25.0.114.g5b0ca878e0
On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:
Thanks for picking this up!
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.
FWIW, I've never really liked the name postmaster as I don't think it conveys
meaning. I support renaming to supervisor or a similar term.
cheers ./daniel
On Tue, Jun 16, 2020 at 7:04 AM Daniel Gustafsson <daniel@yesql.se> wrote:
On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:
Thanks for picking this up!
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.FWIW, I've never really liked the name postmaster as I don't think it conveys
meaning. I support renaming to supervisor or a similar term.
+1. Postmaster has always sounded like a mailer daemon or something,
which we ain't.
On Tue, Jun 16, 2020 at 09:53:34AM +1200, Thomas Munro wrote:
On Tue, Jun 16, 2020 at 7:04 AM Daniel Gustafsson <daniel@yesql.se> wrote:
On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:
Thanks for picking this up!
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.FWIW, I've never really liked the name postmaster as I don't think it conveys
meaning. I support renaming to supervisor or a similar term.+1. Postmaster has always sounded like a mailer daemon or something,
which we ain't.
Postmaster is historically confused with the postmaster email account:
https://en.wikipedia.org/wiki/Postmaster_(computing)
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EnterpriseDB https://enterprisedb.com
The usefulness of a cup is in its emptiness, Bruce Lee
Daniel Gustafsson <daniel@yesql.se> writes:
On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.
FWIW, I've never really liked the name postmaster as I don't think it conveys
meaning. I support renaming to supervisor or a similar term.
Meh. That's carrying PC naming foibles to the point where they
negatively affect our users (by breaking start scripts and such).
I think we should leave this alone.
regards, tom lane
Hi,
On 2020-06-15 19:54:25 -0400, Tom Lane wrote:
Daniel Gustafsson <daniel@yesql.se> writes:
On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.FWIW, I've never really liked the name postmaster as I don't think it conveys
meaning. I support renaming to supervisor or a similar term.Meh. That's carrying PC naming foibles to the point where they
negatively affect our users (by breaking start scripts and such).
I think we should leave this alone.
postmaster is just a symlink, which we very well could just leave in
place... I was really just thinking of the code level stuff. And I think
there's some clarity reasons to rename it as well (see comments by
others in the thread).
Anyway, for now my focus is on patches in the series...
- Andres
On Mon, Jun 15, 2020 at 4:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Meh. That's carrying PC naming foibles to the point where they
negatively affect our users (by breaking start scripts and such).
I think we should leave this alone.
+1. Apart from the practical considerations, I just don't see a
problem with the word postmaster. My mother is a postmistress.
I'm in favor of updating any individual instances of the word "master"
to the preferred equivalent in code and code comments, though.
--
Peter Geoghegan
On Tue, Jun 16, 2020 at 2:23 AM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2020-06-15 19:54:25 -0400, Tom Lane wrote:
Daniel Gustafsson <daniel@yesql.se> writes:
On 15 Jun 2020, at 20:22, Andres Freund <andres@anarazel.de> wrote:
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.FWIW, I've never really liked the name postmaster as I don't think it
conveys
meaning. I support renaming to supervisor or a similar term.
Meh. That's carrying PC naming foibles to the point where they
negatively affect our users (by breaking start scripts and such).
I think we should leave this alone.postmaster is just a symlink, which we very well could just leave in
place... I was really just thinking of the code level stuff. And I think
there's some clarity reasons to rename it as well (see comments by
others in the thread).
Is the symlink even used? If not we could just get rid of it.
I'd be more worried about for example postmaster.pid, which would break a
*lot* of scripts and integrations. postmaster is also exposed in the system
catalogs.
--
Magnus Hagander
Me: https://www.hagander.net/ <http://www.hagander.net/>
Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
On 6/16/20 3:26 AM, Magnus Hagander wrote:
On Tue, Jun 16, 2020 at 2:23 AM Andres Freund wrote:
postmaster is just a symlink, which we very well could just leave in
place... I was really just thinking of the code level stuff. And I think
there's some clarity reasons to rename it as well (see comments by
others in the thread).Is the symlink even used? If not we could just get rid of it.
I am pretty sure that last time I checked Devrim was still using it in his
systemd unit file bundled with the PGDG rpms, although that was probably a
couple of years ago.
Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
On 6/16/20 9:10 AM, Joe Conway wrote:
On 6/16/20 3:26 AM, Magnus Hagander wrote:
On Tue, Jun 16, 2020 at 2:23 AM Andres Freund wrote:
postmaster is just a symlink, which we very well could just leave in
place... I was really just thinking of the code level stuff. And I think
there's some clarity reasons to rename it as well (see comments by
others in the thread).Is the symlink even used? If not we could just get rid of it.
I am pretty sure that last time I checked Devrim was still using it in his
systemd unit file bundled with the PGDG rpms, although that was probably a
couple of years ago.
Just checked a recent install and it's there.
Honestly, I think I'm with Tom, and we can just let this one alone.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Magnus Hagander <magnus@hagander.net> writes:
I'd be more worried about for example postmaster.pid, which would break a
*lot* of scripts and integrations. postmaster is also exposed in the system
catalogs.
Oooh, that's an excellent point. A lot of random stuff knows that file
name.
To be clear, I'm not against removing incidental uses of the word
"master". But the specific case of "postmaster" seems a little too
far ingrained to be worth changing.
regards, tom lane
On Mon, Jun 15, 2020 at 11:22:35AM -0700, Andres Freund wrote:
0006: docs: s/master/root/
Here using root seems a lot better than master anyway (master seems
confusing in regard to inheritance scenarios). But perhaps parent
would be better? Went with root since it's about the topmost table.
Because we allow multiple levels of inheritance, I have always wanted a
clear term for the top-most parent.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EnterpriseDB https://enterprisedb.com
The usefulness of a cup is in its emptiness, Bruce Lee
Hi Andres,
Thanks for doing this!
On 6/15/20 2:22 PM, Andres Freund wrote:
We've removed the use of "slave" from most of the repo (one use
remained, included here), but we didn't do the same for master. In the
attached series I replaced most of the uses.0001: tap tests: s/master/primary/
Pretty clear cut imo.
Nothing to argue with here as far as I can see. It's a lot of churn,
though, so the sooner it goes in the better so people can update for the
next CF.
0002: code: s/master/primary/
This also includes a few minor other changes (s/in master/on the
primary/, a few 'the's added). Perhaps it'd be better to do those
separately?
I would only commit the grammar/language separately if the commit as a
whole does not go in. Some of them really do make the comments much
clearer beyond just in/on.
I think the user-facing messages, e.g.:
- errhint("Make sure the configuration parameter \"%s\" is set on the
master server.",
+ errhint("Make sure the configuration parameter \"%s\" is set on the
primary server.",
should go in no matter what so we are consistent with our documentation.
Same for the postgresql.conf updates.
0003: code: s/master/leader/
This feels pretty obvious. We've largely used the leader / worker
terminology, but there were a few uses of master left.
Yeah, leader already outnumbers master by a lot. Looks good.
0004: code: s/master/$other/
This is most of the remaining uses of master in code. A number of
references to 'master' in the context of toast, a few uses of 'master
copy'. I guess some of these are a bit less clear cut.
Not sure I love authoritative, e.g.
+ * fullPageWrites is the authoritative value used by all backends to
and
+ * grabbed a WAL insertion lock to read the authoritative value in
Possibly "shared"?
+ * Create the Tcl interpreter subsidiary to pltcl_hold_interp.
Maybe use "worker" here? Not much we can do about the Tcl function name,
though. It's pretty localized, though, so may not matter much.
0005: docs: s/master/primary/
These seem mostly pretty straightforward to me. The changes in
high-availability.sgml probably deserve the most attention.
+ on primary and the current time on the standby. Delays in transfer
on *the* primary
I saw a few places where readability could be improved, but this patch
did not make any of them worse, and did make a few better.
0006: docs: s/master/root/
Here using root seems a lot better than master anyway (master seems
confusing in regard to inheritance scenarios). But perhaps parent
would be better? Went with root since it's about the topmost table.
I looked through to see if there was an instance of parent that should
be changed to root but I didn't see any. Even if there are, it's no
worse than before.
0007: docs: s/master/supervisor/
I guess this could be a bit more contentious. Supervisor seems clearer
to me, but I can see why people would disagree. See also later point
about changes I have not done at this stage.
Supervisor seems OK to me, but the postmaster has a special place since
there is only one of them. Main supervisor, root supervisor?
0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.
Why not multi-primary?
After this series there are only two widespread use of 'master' in the
tree.
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.
FWIW, I don't consider this to be a very big change from an end-user
perspective. I don't think the majority of users interact directly with
the postmaster, rather they are using systemd, pg_ctl, pg_ctlcluster, etc.
As for postmaster.pid and postmaster.opts, we have renamed plenty of
things in the past so this is just one more. They'd be better and
clearer as postgresql.pid and postgresql.opts, IMO.
Overall I'm +.5 because I may just be ignorant of the pain this will cause.
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.
Happily this has no end-user impact. I think "main" is a good
alternative but I agree this feels like a separate topic.
One last thing -- are we considering back-patching any/all of this?
Regards,
--
-David
david@pgmasters.net
Hi,
On 2020-06-16 17:14:57 -0400, David Steele wrote:
On 6/15/20 2:22 PM, Andres Freund wrote:
We've removed the use of "slave" from most of the repo (one use
remained, included here), but we didn't do the same for master. In the
attached series I replaced most of the uses.0001: tap tests: s/master/primary/
Pretty clear cut imo.Nothing to argue with here as far as I can see. It's a lot of churn, though,
so the sooner it goes in the better so people can update for the next CF.
Yea, unless somebody protests I'm planning to push this part soon.
0004: code: s/master/$other/
This is most of the remaining uses of master in code. A number of
references to 'master' in the context of toast, a few uses of 'master
copy'. I guess some of these are a bit less clear cut.Not sure I love authoritative, e.g.
+ * fullPageWrites is the authoritative value used by all backends to
and
+ * grabbed a WAL insertion lock to read the authoritative value in
Possibly "shared"?
I don't think shared is necessarily correct for all of these. E.g. in
the GetRedoRecPtr() there's two shared values at play, but only one is
"authoritative".
+ * Create the Tcl interpreter subsidiary to pltcl_hold_interp.
Maybe use "worker" here? Not much we can do about the Tcl function name,
though. It's pretty localized, though, so may not matter much.
I don't think it matters much what we use here
0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.Why not multi-primary?
My understanding of primary is that there really can't be two things
that are primary in relation to each other. active/active is probably
the most common term in use besides multi-master.
One last thing -- are we considering back-patching any/all of this?
I don't think there's a good reason to do so.
Thanks for the look!
Greetings,
Andres Freund
On 6/16/20 6:27 PM, Andres Freund wrote:
On 2020-06-16 17:14:57 -0400, David Steele wrote:
On 6/15/20 2:22 PM, Andres Freund wrote:
0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.Why not multi-primary?
My understanding of primary is that there really can't be two things
that are primary in relation to each other.
Well, I think the same is true for multi-master and that's pretty common.
active/active is probably
the most common term in use besides multi-master.
Works for me and can always be updated later if we come up with
something better. At least active-active will be easier to search for.
One last thing -- are we considering back-patching any/all of this?
I don't think there's a good reason to do so.
I was thinking of back-patching pain but if you don't think that's an
issue then I'm not worried about it.
Thanks for the look!
You are welcome!
--
-David
david@pgmasters.net
On 6/15/20 2:22 PM, Andres Freund wrote:
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.
I'm OK with this, but I would need plenty of notice to get the buildfarm
adjusted.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/15/20 2:22 PM, Andres Freund wrote:
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.
I'm OK with this, but I would need plenty of notice to get the buildfarm
adjusted.
"master" is the default branch name established by git, is it not? Not
something we picked. I don't feel like we need to be out front of the
rest of the world in changing that. It'd be a cheaper change than some of
the other proposals in this thread, no doubt, but it would still create
confusion for new hackers who are used to the standard git convention.
regards, tom lane
On Tue, 16 Jun 2020 at 19:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/15/20 2:22 PM, Andres Freund wrote:
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.I'm OK with this, but I would need plenty of notice to get the buildfarm
adjusted."master" is the default branch name established by git, is it not? Not
something we picked. I don't feel like we need to be out front of the
rest of the world in changing that. It'd be a cheaper change than some of
the other proposals in this thread, no doubt, but it would still create
confusion for new hackers who are used to the standard git convention.
While it is the default I expect that will change soon. Github is planning
on making main the default.
https://www.zdnet.com/article/github-to-replace-master-with-alternative-term-to-avoid-slavery-references/
Many projects are moving from master to main.
I expect it will be less confusing than you think.
Dave Cramer
www.postgres.rocks
On 2020-Jun-16, Tom Lane wrote:
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/15/20 2:22 PM, Andres Freund wrote:
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.I'm OK with this, but I would need plenty of notice to get the buildfarm
adjusted."master" is the default branch name established by git, is it not? Not
something we picked. I don't feel like we need to be out front of the
rest of the world in changing that. It'd be a cheaper change than some of
the other proposals in this thread, no doubt, but it would still create
confusion for new hackers who are used to the standard git convention.
Git itself is discussing this:
https://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t
and it seems that "main" is the winning choice.
(Personally) I would leave master to have root, would leave root to have
default, would leave default to have primary, would leave primary to
have base, would leave base to have main, would leave main to have
mainline.
--
�lvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
On 2020-Jun-16, Tom Lane wrote:
"master" is the default branch name established by git, is it not? Not
something we picked.
Git itself is discussing this:
https://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t
and it seems that "main" is the winning choice.
Oh, interesting. If they do change I'd be happy to follow suit.
But let's wait and see what they do, rather than possibly ending
up with our own private convention.
regards, tom lane
On Wed, Jun 17, 2020 at 3:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
On 2020-Jun-16, Tom Lane wrote:
"master" is the default branch name established by git, is it not? Not
something we picked.Git itself is discussing this:
https://public-inbox.org/git/41438A0F-50E4-4E58-A3A7-3DAAECB5576B@jramsay.com.au/T/#t
and it seems that "main" is the winning choice.
Oh, interesting. If they do change I'd be happy to follow suit.
But let's wait and see what they do, rather than possibly ending
up with our own private convention.
I'm +1 for changing it (with good warning time to handle the buildfarm
situation), but also very much +1 for waiting to see exactly what upstream
(git) decides on and make sure we change to the same. The worst possible
combination would be that we change it to something that's *different* than
upstream ends up with (even if upstream ends up being configurable).
--
Magnus Hagander
Me: https://www.hagander.net/ <http://www.hagander.net/>
Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
On Tue, 2020-06-16 at 19:55 -0400, Tom Lane wrote:
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/15/20 2:22 PM, Andres Freund wrote:
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.I'm OK with this, but I would need plenty of notice to get the buildfarm
adjusted."master" is the default branch name established by git, is it not? Not
something we picked. I don't feel like we need to be out front of the
rest of the world in changing that. It'd be a cheaper change than some of
the other proposals in this thread, no doubt, but it would still create
confusion for new hackers who are used to the standard git convention.
I have the feeling that all this is going somewhat too far.
I feel fine with removing the term "slave", even though I have no qualms
about enslaving machines.
But the term "master" is not restricted to slavery. It can just as well
imply expert knowledge (think academic degree), and it can denote someone
with the authority to command (there is nothing wrong with "servant", right?
Or do we have to abolish the term "server" too?).
I appreciate that other people's sensitivities might be different, and I
don't want to start a fight over it. But renaming things makes the code
history harder to read, so it should be used with moderation.
Yours,
Laurenz Albe
On Mon, Jun 15, 2020 at 8:23 PM Andres Freund <andres@anarazel.de> wrote:
Hi,
We've removed the use of "slave" from most of the repo (one use
remained, included here), but we didn't do the same for master. In the
attached series I replaced most of the uses.0001: tap tests: s/master/primary/
Pretty clear cut imo.0002: code: s/master/primary/
This also includes a few minor other changes (s/in master/on the
primary/, a few 'the's added). Perhaps it'd be better to do those
separately?0003: code: s/master/leader/
This feels pretty obvious. We've largely used the leader / worker
terminology, but there were a few uses of master left.0004: code: s/master/$other/
This is most of the remaining uses of master in code. A number of
references to 'master' in the context of toast, a few uses of 'master
copy'. I guess some of these are a bit less clear cut.0005: docs: s/master/primary/
These seem mostly pretty straightforward to me. The changes in
high-availability.sgml probably deserve the most attention.0006: docs: s/master/root/
Here using root seems a lot better than master anyway (master seems
confusing in regard to inheritance scenarios). But perhaps parent
would be better? Went with root since it's about the topmost table.0007: docs: s/master/supervisor/
I guess this could be a bit more contentious. Supervisor seems clearer
to me, but I can see why people would disagree. See also later point
about changes I have not done at this stage.0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.After this series there are only two widespread use of 'master' in the
tree.
1) 'postmaster'. As changing that would be somewhat invasive, the word
is a bit more ambiguous, and it's largely just internal, I've left
this alone for now. I personally would rather see this renamed as
supervisor, which'd imo actually would also be a lot more
descriptive. I'm willing to do the work, but only if there's at least
some agreement.
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.
In looking at this I realize we also have exactly one thing referred to as
"blacklist" in our codebase, which is the "enum blacklist" (and then a
small internal variable in pgindent). AFAICT, it's not actually exposed to
userspace anywhere, so we could probably make the attached change to
blocklist at no "cost" (the only thing changed is the name of the hash
table, and we definitely change things like that in normal releases with no
specific thought on backwards compat).
//Magnus
Attachments:
blocklist.patchtext/x-patch; charset=US-ASCII; name=blocklist.patchDownload
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 14a8690019..6f93b41e4f 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -75,7 +75,7 @@
#define PARALLEL_KEY_PENDING_SYNCS UINT64CONST(0xFFFFFFFFFFFF000B)
#define PARALLEL_KEY_REINDEX_STATE UINT64CONST(0xFFFFFFFFFFFF000C)
#define PARALLEL_KEY_RELMAPPER_STATE UINT64CONST(0xFFFFFFFFFFFF000D)
-#define PARALLEL_KEY_ENUMBLACKLIST UINT64CONST(0xFFFFFFFFFFFF000E)
+#define PARALLEL_KEY_ENUMBLOCKLIST UINT64CONST(0xFFFFFFFFFFFF000E)
/* Fixed-size parallel state. */
typedef struct FixedParallelState
@@ -211,7 +211,7 @@ InitializeParallelDSM(ParallelContext *pcxt)
Size pendingsyncslen = 0;
Size reindexlen = 0;
Size relmapperlen = 0;
- Size enumblacklistlen = 0;
+ Size enumblocklistlen = 0;
Size segsize = 0;
int i;
FixedParallelState *fps;
@@ -267,8 +267,8 @@ InitializeParallelDSM(ParallelContext *pcxt)
shm_toc_estimate_chunk(&pcxt->estimator, reindexlen);
relmapperlen = EstimateRelationMapSpace();
shm_toc_estimate_chunk(&pcxt->estimator, relmapperlen);
- enumblacklistlen = EstimateEnumBlacklistSpace();
- shm_toc_estimate_chunk(&pcxt->estimator, enumblacklistlen);
+ enumblocklistlen = EstimateEnumBlocklistSpace();
+ shm_toc_estimate_chunk(&pcxt->estimator, enumblocklistlen);
/* If you add more chunks here, you probably need to add keys. */
shm_toc_estimate_keys(&pcxt->estimator, 11);
@@ -348,7 +348,7 @@ InitializeParallelDSM(ParallelContext *pcxt)
char *error_queue_space;
char *session_dsm_handle_space;
char *entrypointstate;
- char *enumblacklistspace;
+ char *enumblocklistspace;
Size lnamelen;
/* Serialize shared libraries we have loaded. */
@@ -404,11 +404,11 @@ InitializeParallelDSM(ParallelContext *pcxt)
shm_toc_insert(pcxt->toc, PARALLEL_KEY_RELMAPPER_STATE,
relmapperspace);
- /* Serialize enum blacklist state. */
- enumblacklistspace = shm_toc_allocate(pcxt->toc, enumblacklistlen);
- SerializeEnumBlacklist(enumblacklistspace, enumblacklistlen);
- shm_toc_insert(pcxt->toc, PARALLEL_KEY_ENUMBLACKLIST,
- enumblacklistspace);
+ /* Serialize enum blocklist state. */
+ enumblocklistspace = shm_toc_allocate(pcxt->toc, enumblocklistlen);
+ SerializeEnumBlocklist(enumblocklistspace, enumblocklistlen);
+ shm_toc_insert(pcxt->toc, PARALLEL_KEY_ENUMBLOCKLIST,
+ enumblocklistspace);
/* Allocate space for worker information. */
pcxt->worker = palloc0(sizeof(ParallelWorkerInfo) * pcxt->nworkers);
@@ -1257,7 +1257,7 @@ ParallelWorkerMain(Datum main_arg)
char *pendingsyncsspace;
char *reindexspace;
char *relmapperspace;
- char *enumblacklistspace;
+ char *enumblocklistspace;
StringInfoData msgbuf;
char *session_dsm_handle_space;
@@ -1449,10 +1449,10 @@ ParallelWorkerMain(Datum main_arg)
relmapperspace = shm_toc_lookup(toc, PARALLEL_KEY_RELMAPPER_STATE, false);
RestoreRelationMap(relmapperspace);
- /* Restore enum blacklist. */
- enumblacklistspace = shm_toc_lookup(toc, PARALLEL_KEY_ENUMBLACKLIST,
+ /* Restore enum blocklist. */
+ enumblocklistspace = shm_toc_lookup(toc, PARALLEL_KEY_ENUMBLOCKLIST,
false);
- RestoreEnumBlacklist(enumblacklistspace);
+ RestoreEnumBlocklist(enumblocklistspace);
/* Attach to the leader's serializable transaction, if SERIALIZABLE. */
AttachSerializableXact(fps->serializable_xact_handle);
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c
index 27e4100a6f..0221313433 100644
--- a/src/backend/catalog/pg_enum.c
+++ b/src/backend/catalog/pg_enum.c
@@ -41,10 +41,10 @@ Oid binary_upgrade_next_pg_enum_oid = InvalidOid;
* committed; otherwise, they might get into indexes where we can't clean
* them up, and then if the transaction rolls back we have a broken index.
* (See comments for check_safe_enum_use() in enum.c.) Values created by
- * EnumValuesCreate are *not* blacklisted; we assume those are created during
+ * EnumValuesCreate are *not* blocklisted; we assume those are created during
* CREATE TYPE, so they can't go away unless the enum type itself does.
*/
-static HTAB *enum_blacklist = NULL;
+static HTAB *enum_blocklist = NULL;
static void RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems);
static int sort_order_cmp(const void *p1, const void *p2);
@@ -181,10 +181,10 @@ EnumValuesDelete(Oid enumTypeOid)
}
/*
- * Initialize the enum blacklist for this transaction.
+ * Initialize the enum blocklist for this transaction.
*/
static void
-init_enum_blacklist(void)
+init_enum_blocklist(void)
{
HASHCTL hash_ctl;
@@ -192,7 +192,7 @@ init_enum_blacklist(void)
hash_ctl.keysize = sizeof(Oid);
hash_ctl.entrysize = sizeof(Oid);
hash_ctl.hcxt = TopTransactionContext;
- enum_blacklist = hash_create("Enum value blacklist",
+ enum_blocklist = hash_create("Enum value blocklist",
32,
&hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
@@ -491,12 +491,12 @@ restart:
table_close(pg_enum, RowExclusiveLock);
- /* Set up the blacklist hash if not already done in this transaction */
- if (enum_blacklist == NULL)
- init_enum_blacklist();
+ /* Set up the blocklist hash if not already done in this transaction */
+ if (enum_blocklist == NULL)
+ init_enum_blocklist();
- /* Add the new value to the blacklist */
- (void) hash_search(enum_blacklist, &newOid, HASH_ENTER, NULL);
+ /* Add the new value to the blocklist */
+ (void) hash_search(enum_blocklist, &newOid, HASH_ENTER, NULL);
}
@@ -585,19 +585,19 @@ RenameEnumLabel(Oid enumTypeOid,
/*
- * Test if the given enum value is on the blacklist
+ * Test if the given enum value is on the blocklist
*/
bool
-EnumBlacklisted(Oid enum_id)
+EnumBlocklisted(Oid enum_id)
{
bool found;
- /* If we've made no blacklist table, all values are safe */
- if (enum_blacklist == NULL)
+ /* If we've made no blocklist table, all values are safe */
+ if (enum_blocklist == NULL)
return false;
/* Else, is it in the table? */
- (void) hash_search(enum_blacklist, &enum_id, HASH_FIND, &found);
+ (void) hash_search(enum_blocklist, &enum_id, HASH_FIND, &found);
return found;
}
@@ -609,11 +609,11 @@ void
AtEOXact_Enum(void)
{
/*
- * Reset the blacklist table, as all our enum values are now committed.
+ * Reset the blocklist table, as all our enum values are now committed.
* The memory will go away automatically when TopTransactionContext is
* freed; it's sufficient to clear our pointer.
*/
- enum_blacklist = NULL;
+ enum_blocklist = NULL;
}
@@ -692,12 +692,12 @@ sort_order_cmp(const void *p1, const void *p2)
}
Size
-EstimateEnumBlacklistSpace(void)
+EstimateEnumBlocklistSpace(void)
{
size_t entries;
- if (enum_blacklist)
- entries = hash_get_num_entries(enum_blacklist);
+ if (enum_blocklist)
+ entries = hash_get_num_entries(enum_blocklist);
else
entries = 0;
@@ -706,7 +706,7 @@ EstimateEnumBlacklistSpace(void)
}
void
-SerializeEnumBlacklist(void *space, Size size)
+SerializeEnumBlocklist(void *space, Size size)
{
Oid *serialized = (Oid *) space;
@@ -714,15 +714,15 @@ SerializeEnumBlacklist(void *space, Size size)
* Make sure the hash table hasn't changed in size since the caller
* reserved the space.
*/
- Assert(size == EstimateEnumBlacklistSpace());
+ Assert(size == EstimateEnumBlocklistSpace());
/* Write out all the values from the hash table, if there is one. */
- if (enum_blacklist)
+ if (enum_blocklist)
{
HASH_SEQ_STATUS status;
Oid *value;
- hash_seq_init(&status, enum_blacklist);
+ hash_seq_init(&status, enum_blocklist);
while ((value = (Oid *) hash_seq_search(&status)))
*serialized++ = *value;
}
@@ -738,11 +738,11 @@ SerializeEnumBlacklist(void *space, Size size)
}
void
-RestoreEnumBlacklist(void *space)
+RestoreEnumBlocklist(void *space)
{
Oid *serialized = (Oid *) space;
- Assert(!enum_blacklist);
+ Assert(!enum_blocklist);
/*
* As a special case, if the list is empty then don't even bother to
@@ -753,9 +753,9 @@ RestoreEnumBlacklist(void *space)
return;
/* Read all the values into a new hash table. */
- init_enum_blacklist();
+ init_enum_blocklist();
do
{
- hash_search(enum_blacklist, serialized++, HASH_ENTER, NULL);
+ hash_search(enum_blocklist, serialized++, HASH_ENTER, NULL);
} while (OidIsValid(*serialized));
}
diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c
index 5ead794e34..ac78ca853f 100644
--- a/src/backend/utils/adt/enum.c
+++ b/src/backend/utils/adt/enum.c
@@ -83,12 +83,12 @@ check_safe_enum_use(HeapTuple enumval_tup)
return;
/*
- * Check if the enum value is blacklisted. If not, it's safe, because it
+ * Check if the enum value is blocklisted. If not, it's safe, because it
* was made during CREATE TYPE AS ENUM and can't be shorter-lived than its
* owning type. (This'd also be false for values made by other
* transactions; but the previous tests should have handled all of those.)
*/
- if (!EnumBlacklisted(en->oid))
+ if (!EnumBlocklisted(en->oid))
return;
/*
diff --git a/src/include/catalog/pg_enum.h b/src/include/catalog/pg_enum.h
index b28d441ba7..fcf9bc537c 100644
--- a/src/include/catalog/pg_enum.h
+++ b/src/include/catalog/pg_enum.h
@@ -53,10 +53,10 @@ extern void AddEnumLabel(Oid enumTypeOid, const char *newVal,
bool skipIfExists);
extern void RenameEnumLabel(Oid enumTypeOid,
const char *oldVal, const char *newVal);
-extern bool EnumBlacklisted(Oid enum_id);
-extern Size EstimateEnumBlacklistSpace(void);
-extern void SerializeEnumBlacklist(void *space, Size size);
-extern void RestoreEnumBlacklist(void *space);
+extern bool EnumBlocklisted(Oid enum_id);
+extern Size EstimateEnumBlocklistSpace(void);
+extern void SerializeEnumBlocklist(void *space, Size size);
+extern void RestoreEnumBlocklist(void *space);
extern void AtEOXact_Enum(void);
#endif /* PG_ENUM_H */
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index 457e328824..67582048b9 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -55,11 +55,11 @@ $excludes ||= "$code_base/src/tools/pgindent/exclude_file_patterns"
# according to <stdbool.h>), and may include some names we don't want
# treated as typedefs, although various headers that some builds include
# might make them so. For the moment we just hardwire a whitelist of names
-# to add and a blacklist of names to remove; eventually this may need to be
+# to add and a blocklist of names to remove; eventually this may need to be
# easier to configure. Note that the typedefs need trailing newlines.
my @whitelist = ("bool\n");
-my %blacklist = map { +"$_\n" => 1 } qw(
+my %blocklist = map { +"$_\n" => 1 } qw(
ANY FD_SET U abs allocfunc boolean date digit ilist interval iterator other
pointer printfunc reference string timestamp type wrap
);
@@ -137,8 +137,8 @@ sub load_typedefs
# add whitelisted entries
push(@typedefs, @whitelist);
- # remove blacklisted entries
- @typedefs = grep { !$blacklist{$_} } @typedefs;
+ # remove blocklisted entries
+ @typedefs = grep { !$blocklist{$_} } @typedefs;
# write filtered typedefs
my $filter_typedefs_fh = new File::Temp(TEMPLATE => "pgtypedefXXXXX");
On 6/17/20 6:32 AM, Magnus Hagander wrote:
In looking at this I realize we also have exactly one thing referred to
as "blacklist" in our codebase, which is the "enum blacklist" (and then
a small internal variable in pgindent). AFAICT, it's not actually
exposed to userspace anywhere, so we could probably make the attached
change to blocklist at no "cost" (the only thing changed is the name of
the hash table, and we definitely change things like that in normal
releases with no specific thought on backwards compat).
+1. Though if we are doing that, we should also handle "whitelist" too,
as this attached patch does. It's mostly in comments (with one Perl
variable), but I switched the language around to use "allowed"
Jonathan
Attachments:
allowed.patchtext/plain; charset=UTF-8; name=allowed.patch; x-mac-creator=0; x-mac-type=0Download
diff --git a/contrib/postgres_fdw/postgres_fdw.h b/contrib/postgres_fdw/postgres_fdw.h
index eef410db39..3364a203c7 100644
--- a/contrib/postgres_fdw/postgres_fdw.h
+++ b/contrib/postgres_fdw/postgres_fdw.h
@@ -77,7 +77,7 @@ typedef struct PgFdwRelationInfo
bool use_remote_estimate;
Cost fdw_startup_cost;
Cost fdw_tuple_cost;
- List *shippable_extensions; /* OIDs of whitelisted extensions */
+ List *shippable_extensions; /* OIDs of allowed extensions */
/* Cached catalog information. */
ForeignTable *table;
diff --git a/contrib/postgres_fdw/shippable.c b/contrib/postgres_fdw/shippable.c
index 3433c19712..8a3dca9e90 100644
--- a/contrib/postgres_fdw/shippable.c
+++ b/contrib/postgres_fdw/shippable.c
@@ -111,7 +111,8 @@ InitializeShippableCache(void)
*
* Right now "shippability" is exclusively a function of whether the object
* belongs to an extension declared by the user. In the future we could
- * additionally have a whitelist of functions/operators declared one at a time.
+ * additionally have a list of allowed functions/operators declared one at a
+ * time.
*/
static bool
lookup_shippable(Oid objectId, Oid classId, PgFdwRelationInfo *fpinfo)
diff --git a/src/backend/access/hash/hashvalidate.c b/src/backend/access/hash/hashvalidate.c
index 6f14a9fb45..744d22b042 100644
--- a/src/backend/access/hash/hashvalidate.c
+++ b/src/backend/access/hash/hashvalidate.c
@@ -309,7 +309,7 @@ check_hash_func_signature(Oid funcid, int16 amprocnum, Oid argtype)
* that are different from but physically compatible with the opclass
* datatype. In some of these cases, even a "binary coercible" check
* fails because there's no relevant cast. For the moment, fix it by
- * having a whitelist of allowed cases. Test the specific function
+ * having a list of allowed cases. Test the specific function
* identity, not just its input type, because hashvarlena() takes
* INTERNAL and allowing any such function seems too scary.
*/
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index e992d1bbfc..267fc7adfd 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -632,9 +632,9 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
/*
* Check if blocked_pid is waiting for a safe snapshot. We could in
* theory check the resulting array of blocker PIDs against the
- * interesting PIDs whitelist, but since there is no danger of autovacuum
- * blocking GetSafeSnapshot there seems to be no point in expending cycles
- * on allocating a buffer and searching for overlap; so it's presently
+ * interesting list of allowed PIDs, but since there is no danger of
+ * autovacuum blocking GetSafeSnapshot there seems to be no point in expending
+ * cycles on allocating a buffer and searching for overlap; so it's presently
* sufficient for the isolation tester's purposes to use a single element
* buffer and check if the number of safe snapshot blockers is non-zero.
*/
diff --git a/src/tools/pginclude/README b/src/tools/pginclude/README
index a067c7f472..d1906a3516 100644
--- a/src/tools/pginclude/README
+++ b/src/tools/pginclude/README
@@ -64,7 +64,7 @@ with no prerequisite headers other than postgres.h (or postgres_fe.h
or c.h, as appropriate).
A small number of header files are exempted from this requirement,
-and are whitelisted in the headerscheck script.
+and are allowed in the headerscheck script.
The easy way to run the script is to say "make -s headerscheck" in
the top-level build directory after completing a build. You should
@@ -86,7 +86,7 @@ the project's coding language is C, some people write extensions in C++,
so it's helpful for include files to be C++-clean.
A small number of header files are exempted from this requirement,
-and are whitelisted in the cpluspluscheck script.
+and are allowed in the cpluspluscheck script.
The easy way to run the script is to say "make -s cpluspluscheck" in
the top-level build directory after completing a build. You should
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index 457e328824..06e4d68991 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -54,10 +54,10 @@ $excludes ||= "$code_base/src/tools/pgindent/exclude_file_patterns"
# some names we want to treat like typedefs, e.g. "bool" (which is a macro
# according to <stdbool.h>), and may include some names we don't want
# treated as typedefs, although various headers that some builds include
-# might make them so. For the moment we just hardwire a whitelist of names
-# to add and a blacklist of names to remove; eventually this may need to be
-# easier to configure. Note that the typedefs need trailing newlines.
-my @whitelist = ("bool\n");
+# might make them so. For the moment we just hardwire a list of allowed of
+# names to add and a blacklist of names to remove; eventually this may need to
+# be easier to configure. Note that the typedefs need trailing newlines.
+my @allowed = ("bool\n");
my %blacklist = map { +"$_\n" => 1 } qw(
ANY FD_SET U abs allocfunc boolean date digit ilist interval iterator other
@@ -134,8 +134,8 @@ sub load_typedefs
}
}
- # add whitelisted entries
- push(@typedefs, @whitelist);
+ # add allowed entries
+ push(@typedefs, @allowed);
# remove blacklisted entries
@typedefs = grep { !$blacklist{$_} } @typedefs;
On 6/17/20 6:06 AM, Laurenz Albe wrote:
On Tue, 2020-06-16 at 19:55 -0400, Tom Lane wrote:
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/15/20 2:22 PM, Andres Freund wrote:
2) 'master' as a reference to the branch. Personally I be in favor of
changing the branch name, but it seems like it'd be better done as a
somewhat separate discussion to me, as it affects development
practices to some degree.I'm OK with this, but I would need plenty of notice to get the buildfarm
adjusted."master" is the default branch name established by git, is it not? Not
something we picked. I don't feel like we need to be out front of the
rest of the world in changing that. It'd be a cheaper change than some of
the other proposals in this thread, no doubt, but it would still create
confusion for new hackers who are used to the standard git convention.I have the feeling that all this is going somewhat too far.
First, I +1 the changes Andres proposed overall. In addition to it being
the right thing to do, it brings inline a lot of the terminology we have
been using to describe concepts in PostgreSQL for years (e.g.
primary/replica).
For the name of the git branch, I +1 following the convention of the git
upstream, and make changes based on that. Understandably, it could break
things for a bit, but that will occur for a lot of other projects as
well and everyone will adopt. We have the benefit that we're just
beginning our new development cycle too, so this is a good time to
introduce breaking change ;)
Jonathan
On 6/17/20 6:32 AM, Magnus Hagander wrote:
In looking at this I realize we also have exactly one thing referred
to as "blacklist" in our codebase, which is the "enum blacklist" (and
then a small internal variable in pgindent). AFAICT, it's not actually
exposed to userspace anywhere, so we could probably make the attached
change to blocklist at no "cost" (the only thing changed is the name
of the hash table, and we definitely change things like that in normal
releases with no specific thought on backwards compat).
I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are
too many other uses of Block in the sources. Forbidden might be a better
substitution, or Banned maybe. BanList is even less characters than
BlackList.
I know, bikeshedding here.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/17/20 6:32 AM, Magnus Hagander wrote:
In looking at this I realize we also have exactly one thing referred
to as "blacklist" in our codebase, which is the "enum blacklist" (and
then a small internal variable in pgindent). AFAICT, it's not actually
exposed to userspace anywhere, so we could probably make the attached
change to blocklist at no "cost" (the only thing changed is the name
of the hash table, and we definitely change things like that in normal
releases with no specific thought on backwards compat).
I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are
too many other uses of Block in the sources. Forbidden might be a better
substitution, or Banned maybe. BanList is even less characters than
BlackList.
I think worrying about blacklist/whitelist is carrying things a bit far
in the first place.
regards, tom lane
On 6/17/20 11:00 AM, Tom Lane wrote:
Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:
On 6/17/20 6:32 AM, Magnus Hagander wrote:
In looking at this I realize we also have exactly one thing referred
to as "blacklist" in our codebase, which is the "enum blacklist" (and
then a small internal variable in pgindent). AFAICT, it's not actually
exposed to userspace anywhere, so we could probably make the attached
change to blocklist at no "cost" (the only thing changed is the name
of the hash table, and we definitely change things like that in normal
releases with no specific thought on backwards compat).I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are
too many other uses of Block in the sources. Forbidden might be a better
substitution, or Banned maybe. BanList is even less characters than
BlackList.I think worrying about blacklist/whitelist is carrying things a bit far
in the first place.
For the small effort and minimal impact involved, I think it's worth
avoiding the bad publicity.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan <
andrew.dunstan@2ndquadrant.com> wrote:
On 6/17/20 6:32 AM, Magnus Hagander wrote:
In looking at this I realize we also have exactly one thing referred
to as "blacklist" in our codebase, which is the "enum blacklist" (and
then a small internal variable in pgindent). AFAICT, it's not actually
exposed to userspace anywhere, so we could probably make the attached
change to blocklist at no "cost" (the only thing changed is the name
of the hash table, and we definitely change things like that in normal
releases with no specific thought on backwards compat).I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are
too many other uses of Block in the sources. Forbidden might be a better
substitution, or Banned maybe. BanList is even less characters than
BlackList.I know, bikeshedding here.
I'd be OK with either of those really -- I went with block because it was
the easiest one :)
Not sure the number of characters is the important part :) Banlist does
make sense to me for other reasons though -- it's what it is, isn't it? It
bans those oids from being used in the current session -- I don't think
there's any struggle to "make that sentence work", which means that seems
like the relevant term.
I do think it's worth doing -- it's a small round of changes, and it
doesn't change anything user-exposed, so the cost for us is basically zero.
--
Magnus Hagander
Me: https://www.hagander.net/ <http://www.hagander.net/>
Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
On 6/17/20 12:08 PM, Magnus Hagander wrote:
On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan
<andrew.dunstan@2ndquadrant.com <mailto:andrew.dunstan@2ndquadrant.com>>
wrote:On 6/17/20 6:32 AM, Magnus Hagander wrote:
In looking at this I realize we also have exactly one thing referred
to as "blacklist" in our codebase, which is the "enum blacklist" (and
then a small internal variable in pgindent). AFAICT, it's not actually
exposed to userspace anywhere, so we could probably make the attached
change to blocklist at no "cost" (the only thing changed is the name
of the hash table, and we definitely change things like that in normal
releases with no specific thought on backwards compat).I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are
too many other uses of Block in the sources. Forbidden might be a better
substitution, or Banned maybe. BanList is even less characters than
BlackList.I know, bikeshedding here.
I'd be OK with either of those really -- I went with block because it
was the easiest one :)Not sure the number of characters is the important part :) Banlist does
make sense to me for other reasons though -- it's what it is, isn't it?
It bans those oids from being used in the current session -- I don't
think there's any struggle to "make that sentence work", which means
that seems like the relevant term.I do think it's worth doing -- it's a small round of changes, and it
doesn't change anything user-exposed, so the cost for us is basically zero.
+1. I know post efforts for us to update our language have been
well-received, even long after the fact, and given this set has been
voiced actively and other fora and, as Magnus states, the cost for us to
change it is basically zero, we should just do it.
Jonathan
On 6/17/20 12:08 PM, Magnus Hagander wrote:
On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan
<andrew.dunstan@2ndquadrant.com <mailto:andrew.dunstan@2ndquadrant.com>>I'm not sure I like doing s/Black/Block/ here. It reads oddly. There are
too many other uses of Block in the sources. Forbidden might be a better
substitution, or Banned maybe. BanList is even less characters than
BlackList.I'd be OK with either of those really -- I went with block because it
was the easiest one :)Not sure the number of characters is the important part :) Banlist does
make sense to me for other reasons though -- it's what it is, isn't it?
It bans those oids from being used in the current session -- I don't
think there's any struggle to "make that sentence work", which means
that seems like the relevant term.
I've seen also seen allowList/denyList as an alternative. I do agree
that blockList is a bit confusing since we often use block in a very
different context.
I do think it's worth doing -- it's a small round of changes, and it
doesn't change anything user-exposed, so the cost for us is basically zero.
+1
Regards,
--
-David
david@pgmasters.net
On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:
0002: code: s/master/primary/
0003: code: s/master/leader/
0006: docs: s/master/root/
0007: docs: s/master/supervisor/
I'd just like to make the pointer here that there's value in trying to
use different terminology for different things. I picked "leader" and
"worker" for parallel query and tried to use them consistently because
"master" and "slave" were being used widely to refer to physical
replication, and I thought it would be clearer to use something
different, so I did. It's confusing if we use the same word for the
server from which others replicate, the table from which others
inherit, the process which initiates parallelism, and the first
process that is launched across the whole cluster, regardless of
*which* word we use for those things. So, I think there is every
possibility that with careful thought, we can actually make things
clearer, in addition to avoiding the use of terms that are no longer
welcome.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:
0002: code: s/master/primary/
0003: code: s/master/leader/
0006: docs: s/master/root/
0007: docs: s/master/supervisor/
I'd just like to make the pointer here that there's value in trying to
use different terminology for different things.
+1 for that.
regards, tom lane
Hi,
On 2020-06-17 13:59:26 -0400, Robert Haas wrote:
On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:
0002: code: s/master/primary/
0003: code: s/master/leader/
0006: docs: s/master/root/
0007: docs: s/master/supervisor/I'd just like to make the pointer here that there's value in trying to
use different terminology for different things. I picked "leader" and
"worker" for parallel query and tried to use them consistently because
"master" and "slave" were being used widely to refer to physical
replication, and I thought it would be clearer to use something
different, so I did.
Just to be clear, that's exactly what I tried to do in the above
patches. E.g. in 0003 I tried to follow the scheme you just
outlined. There's a part of that patch that addresses pg_dump, but most
of the rest is just parallelism related pieces that ended up using
master, even though leader is the more widely used term. I assume you
were just saying that the above use of different terms is actually
helpful:
It's confusing if we use the same word for the server from which
others replicate, the table from which others inherit, the process
which initiates parallelism, and the first process that is launched
across the whole cluster, regardless of *which* word we use for those
things. So, I think there is every possibility that with careful
thought, we can actually make things clearer, in addition to avoiding
the use of terms that are no longer welcome.
With which I wholeheartedly agree.
Greetings,
Andres Freund
On Wed, Jun 17, 2020 at 01:59:26PM -0400, Robert Haas wrote:
On Mon, Jun 15, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:
0002: code: s/master/primary/
0003: code: s/master/leader/
0006: docs: s/master/root/
0007: docs: s/master/supervisor/I'd just like to make the pointer here that there's value in trying to
use different terminology for different things. I picked "leader" and
"worker" for parallel query and tried to use them consistently because
"master" and "slave" were being used widely to refer to physical
replication, and I thought it would be clearer to use something
different, so I did. It's confusing if we use the same word for the
server from which others replicate, the table from which others
inherit, the process which initiates parallelism, and the first
process that is launched across the whole cluster, regardless of
*which* word we use for those things. So, I think there is every
possibility that with careful thought, we can actually make things
clearer, in addition to avoiding the use of terms that are no longer
welcome.
I think the question is whether we can improve our terms as part of this
rewording, or if we make them worse. When we got rid of slave and made
it standby, I think we made things worse since many of the replicas were
not functioning for the purpose of standby. Standby is a role, not a
status, while replica is a status.
The other issue is how the terms interlink with other terms. When we
used master/slave, multi-master matched the wording, but replication
didn't match. If we go with replica, replication works, and
primary/replica kind of fits, e.g., master/replica does not.
Multi-master then no longer fits, multi-primary sounds odd, and
active-active doesn't match, though active-active is not used as much as
primary/replica, so maybe that is OK. Ideally we would have all terms
matching, but maybe that is impossible.
My point is that these terms are symbolic (similes) --- the new terms
should link to their roles and to other terms in a logical way.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EnterpriseDB https://enterprisedb.com
The usefulness of a cup is in its emptiness, Bruce Lee
Hi,
I've pushed most of the changes.
On 2020-06-16 18:59:25 -0400, David Steele wrote:
On 6/16/20 6:27 PM, Andres Freund wrote:
On 2020-06-16 17:14:57 -0400, David Steele wrote:
On 6/15/20 2:22 PM, Andres Freund wrote:
0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.Why not multi-primary?
My understanding of primary is that there really can't be two things
that are primary in relation to each other.Well, I think the same is true for multi-master and that's pretty common.
active/active is probably
the most common term in use besides multi-master.Works for me and can always be updated later if we come up with something
better. At least active-active will be easier to search for.
What do you think about the attached?
Greetings,
Andres Freund
Attachments:
0001-docs-use-active-active-instead-of-multi-master-repli.patchtext/x-diff; charset=us-asciiDownload
From 254c1f45beb8e9dee840a497d5c040b00015a8f3 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 15 Jun 2020 10:39:15 -0700
Subject: [PATCH] docs: use active-active instead of multi-master replication.
Author: Andres Freund
Reviewed-By: David Steele
Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue@alap3.anarazel.de
---
doc/src/sgml/high-availability.sgml | 34 +++++++++++++++--------------
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 6a9184f314e..2e090419f18 100644
--- a/doc/src/sgml/high-availability.sgml
+++ b/doc/src/sgml/high-availability.sgml
@@ -38,12 +38,14 @@
<para>
Some solutions deal with synchronization by allowing only one
server to modify the data. Servers that can modify data are
- called read/write, <firstterm>master</firstterm> or <firstterm>primary</firstterm> servers.
- Servers that track changes in the master are called <firstterm>standby</firstterm>
- or <firstterm>secondary</firstterm> servers. A standby server that cannot be connected
- to until it is promoted to a master server is called a <firstterm>warm
- standby</firstterm> server, and one that can accept connections and serves read-only
- queries is called a <firstterm>hot standby</firstterm> server.
+ called read/write, <firstterm>primary</firstterm>,
+ <firstterm>active</firstterm> (or <firstterm>master</firstterm>) servers.
+ Servers that track changes in the primary are called
+ <firstterm>standby</firstterm> or <firstterm>secondary</firstterm> servers.
+ A standby server that cannot be connected to until it is promoted to a
+ primary server is called a <firstterm>warm standby</firstterm> server, and
+ one that can accept connections and serves read-only queries is called a
+ <firstterm>hot standby</firstterm> server.
</para>
<para>
@@ -177,14 +179,14 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Trigger-Based Master-Standby Replication</term>
+ <term>Trigger-Based Primary-Standby Replication</term>
<listitem>
<para>
- A master-standby replication setup sends all data modification
- queries to the master server. The master server asynchronously
+ A primary-standby replication setup sends all data modification
+ queries to the primary server. The primary server asynchronously
sends data changes to the standby server. The standby can answer
- read-only queries while the master server is running. The
+ read-only queries while the primary server is running. The
standby server is ideal for data warehouse queries.
</para>
@@ -233,14 +235,14 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Asynchronous Multimaster Replication</term>
+ <term>Asynchronous Active-Active Replication</term>
<listitem>
<para>
For servers that are not regularly connected or have slow
communication links, like laptops or
remote servers, keeping data consistent among servers is a
- challenge. Using asynchronous multimaster replication, each
+ challenge. Using asynchronous active-active replication, each
server works independently, and periodically communicates with
the other servers to identify conflicting transactions. The
conflicts can be resolved by users or conflict resolution rules.
@@ -250,17 +252,17 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>
<varlistentry>
- <term>Synchronous Multimaster Replication</term>
+ <term>Synchronous Active-Active Replication</term>
<listitem>
<para>
- In synchronous multimaster replication, each server can accept
+ In synchronous active-active replication, each server can accept
write requests, and modified data is transmitted from the
original server to every other server before each transaction
commits. Heavy write activity can cause excessive locking and
commit delays, leading to poor performance. Read requests can
be sent to any server. Some implementations use shared disk
- to reduce the communication overhead. Synchronous multimaster
+ to reduce the communication overhead. Synchronous active-active
replication is best for mostly read workloads, though its big
advantage is that any server can accept write requests —
there is no need to partition workloads between primary and
@@ -351,7 +353,7 @@ protocol to make nodes agree on a serializable transactional order.
</row>
<row>
- <entry>Allows multiple master servers</entry>
+ <entry>Allows multiple primary servers</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
--
2.25.0.114.g5b0ca878e0
On 7/8/20 4:39 PM, Andres Freund wrote:
Hi,
I've pushed most of the changes.
On 2020-06-16 18:59:25 -0400, David Steele wrote:
On 6/16/20 6:27 PM, Andres Freund wrote:
On 2020-06-16 17:14:57 -0400, David Steele wrote:
On 6/15/20 2:22 PM, Andres Freund wrote:
0008: docs: WIP multi-master rephrasing.
I like neither the new nor the old language much. I'd welcome input.Why not multi-primary?
My understanding of primary is that there really can't be two things
that are primary in relation to each other.Well, I think the same is true for multi-master and that's pretty common.
active/active is probably
the most common term in use besides multi-master.Works for me and can always be updated later if we come up with something
better. At least active-active will be easier to search for.What do you think about the attached?
I think this phrasing in the original/updated version is pretty awkward:
+ A standby server that cannot be connected to until it is promoted to a
+ primary server is called a ...
How about:
+ A standby server that must be promoted to a primary server before
+ accepting connections is called a ...
Other than that it looks good to me.
Regards,
--
-David
david@pgmasters.net
On 2020-Jul-08, David Steele wrote:
On 7/8/20 4:39 PM, Andres Freund wrote:
I think this phrasing in the original/updated version is pretty awkward:
+ A standby server that cannot be connected to until it is promoted to a + primary server is called a ...
Yeah.
How about:
+ A standby server that must be promoted to a primary server before + accepting connections is called a ...
How about just reducing it to "A standby server that doesn't accept
connection is called a ..."? We don't really need to explain that if
you do promote the standby it will start accept connections -- do we?
It should be pretty obvious if you promote a standby, it will cease to
be a standby in the first place. This verbiage seems excessive.
--
�lvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 7/8/20 5:17 PM, Alvaro Herrera wrote:
On 2020-Jul-08, David Steele wrote:
On 7/8/20 4:39 PM, Andres Freund wrote:
I think this phrasing in the original/updated version is pretty awkward:
+ A standby server that cannot be connected to until it is promoted to a + primary server is called a ...Yeah.
How about:
+ A standby server that must be promoted to a primary server before + accepting connections is called a ...How about just reducing it to "A standby server that doesn't accept
connection is called a ..."? We don't really need to explain that if
you do promote the standby it will start accept connections -- do we?
It should be pretty obvious if you promote a standby, it will cease to
be a standby in the first place. This verbiage seems excessive.
Works for me.
Regards,
--
-David
david@pgmasters.net
On Wed, Jun 17, 2020 at 9:27 AM David Steele <david@pgmasters.net> wrote:
On 6/17/20 12:08 PM, Magnus Hagander wrote:
On Wed, Jun 17, 2020 at 4:15 PM Andrew Dunstan
<andrew.dunstan@2ndquadrant.com <mailto:andrew.dunstan@2ndquadrant.com>>I'm not sure I like doing s/Black/Block/ here. It reads oddly. There
are
too many other uses of Block in the sources. Forbidden might be a
better
substitution, or Banned maybe. BanList is even less characters than
BlackList.I'd be OK with either of those really -- I went with block because it
was the easiest one :)Not sure the number of characters is the important part :) Banlist does
make sense to me for other reasons though -- it's what it is, isn't it?
It bans those oids from being used in the current session -- I don't
think there's any struggle to "make that sentence work", which means
that seems like the relevant term.I've seen also seen allowList/denyList as an alternative. I do agree
that blockList is a bit confusing since we often use block in a very
different context.
+1 for allowList/denyList as alternative
I do think it's worth doing -- it's a small round of changes, and it
doesn't change anything user-exposed, so the cost for us is basically
zero.
+1
Agree number of occurrences for whitelist and blacklist are not many, so
cleaning these would be helpful and patches already proposed for it
git grep whitelist | wc -l
10
git grep blacklist | wc -l
40
Thanks a lot for language cleanups. Greenplum, fork of PostgreSQL, wishes
to perform similar cleanups and upstream doing it really helps us
downstream.
On Wed, Jun 17, 2020 at 10:32 PM Magnus Hagander <magnus@hagander.net> wrote:
In looking at this I realize we also have exactly one thing referred to as "blacklist" in our codebase, which is the "enum blacklist" (and then a small internal variable in pgindent). AFAICT, it's not actually exposed to userspace anywhere, so we could probably make the attached change to blocklist at no "cost" (the only thing changed is the name of the hash table, and we definitely change things like that in normal releases with no specific thought on backwards compat).
+1
Hmm, can we find a more descriptive name for this mechanism? What
about calling it the "uncommitted enum table"? See attached.
Attachments:
0001-Rename-enum-blacklist-to-uncommitted-enum-table.patchtext/x-patch; charset=US-ASCII; name=0001-Rename-enum-blacklist-to-uncommitted-enum-table.patchDownload
From 39dfbc83691ee9f48ea844172e38231740674aed Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Thu, 22 Oct 2020 10:09:55 +1300
Subject: [PATCH] Rename "enum blacklist" to "uncommitted enum table".
Internal data structures relating to uncommitted enum values are better
described that way, and the earlier term is now unpopular in the
software industry for its (unintended) connotations.
Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue%40alap3.anarazel.de
---
src/backend/access/transam/parallel.c | 32 ++++++-------
src/backend/catalog/pg_enum.c | 65 ++++++++++++++-------------
src/backend/utils/adt/enum.c | 4 +-
src/include/catalog/pg_enum.h | 8 ++--
4 files changed, 56 insertions(+), 53 deletions(-)
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index b0426960c7..1f499b5131 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -75,7 +75,7 @@
#define PARALLEL_KEY_PENDING_SYNCS UINT64CONST(0xFFFFFFFFFFFF000B)
#define PARALLEL_KEY_REINDEX_STATE UINT64CONST(0xFFFFFFFFFFFF000C)
#define PARALLEL_KEY_RELMAPPER_STATE UINT64CONST(0xFFFFFFFFFFFF000D)
-#define PARALLEL_KEY_ENUMBLACKLIST UINT64CONST(0xFFFFFFFFFFFF000E)
+#define PARALLEL_KEY_UNCOMMITTEDENUMTABLE UINT64CONST(0xFFFFFFFFFFFF000E)
/* Fixed-size parallel state. */
typedef struct FixedParallelState
@@ -211,7 +211,7 @@ InitializeParallelDSM(ParallelContext *pcxt)
Size pendingsyncslen = 0;
Size reindexlen = 0;
Size relmapperlen = 0;
- Size enumblacklistlen = 0;
+ Size uncommittedenumtablelen = 0;
Size segsize = 0;
int i;
FixedParallelState *fps;
@@ -267,8 +267,8 @@ InitializeParallelDSM(ParallelContext *pcxt)
shm_toc_estimate_chunk(&pcxt->estimator, reindexlen);
relmapperlen = EstimateRelationMapSpace();
shm_toc_estimate_chunk(&pcxt->estimator, relmapperlen);
- enumblacklistlen = EstimateEnumBlacklistSpace();
- shm_toc_estimate_chunk(&pcxt->estimator, enumblacklistlen);
+ uncommittedenumtablelen = EstimateUncommittedEnumTableSpace();
+ shm_toc_estimate_chunk(&pcxt->estimator, uncommittedenumtablelen);
/* If you add more chunks here, you probably need to add keys. */
shm_toc_estimate_keys(&pcxt->estimator, 11);
@@ -348,7 +348,7 @@ InitializeParallelDSM(ParallelContext *pcxt)
char *error_queue_space;
char *session_dsm_handle_space;
char *entrypointstate;
- char *enumblacklistspace;
+ char *uncommittedenumtablespace;
Size lnamelen;
/* Serialize shared libraries we have loaded. */
@@ -404,11 +404,13 @@ InitializeParallelDSM(ParallelContext *pcxt)
shm_toc_insert(pcxt->toc, PARALLEL_KEY_RELMAPPER_STATE,
relmapperspace);
- /* Serialize enum blacklist state. */
- enumblacklistspace = shm_toc_allocate(pcxt->toc, enumblacklistlen);
- SerializeEnumBlacklist(enumblacklistspace, enumblacklistlen);
- shm_toc_insert(pcxt->toc, PARALLEL_KEY_ENUMBLACKLIST,
- enumblacklistspace);
+ /* Serialize uncommitted enum state. */
+ uncommittedenumtablespace = shm_toc_allocate(pcxt->toc,
+ uncommittedenumtablelen);
+ SerializeUncommittedEnumTable(uncommittedenumtablespace,
+ uncommittedenumtablelen);
+ shm_toc_insert(pcxt->toc, PARALLEL_KEY_UNCOMMITTEDENUMTABLE,
+ uncommittedenumtablespace);
/* Allocate space for worker information. */
pcxt->worker = palloc0(sizeof(ParallelWorkerInfo) * pcxt->nworkers);
@@ -1257,7 +1259,7 @@ ParallelWorkerMain(Datum main_arg)
char *pendingsyncsspace;
char *reindexspace;
char *relmapperspace;
- char *enumblacklistspace;
+ char *uncommittedenumtablespace;
StringInfoData msgbuf;
char *session_dsm_handle_space;
@@ -1449,10 +1451,10 @@ ParallelWorkerMain(Datum main_arg)
relmapperspace = shm_toc_lookup(toc, PARALLEL_KEY_RELMAPPER_STATE, false);
RestoreRelationMap(relmapperspace);
- /* Restore enum blacklist. */
- enumblacklistspace = shm_toc_lookup(toc, PARALLEL_KEY_ENUMBLACKLIST,
- false);
- RestoreEnumBlacklist(enumblacklistspace);
+ /* Restore uncommitted enum table. */
+ uncommittedenumtablespace =
+ shm_toc_lookup(toc,PARALLEL_KEY_UNCOMMITTEDENUMTABLE, false);
+ RestoreUncommittedEnumTable(uncommittedenumtablespace);
/* Attach to the leader's serializable transaction, if SERIALIZABLE. */
AttachSerializableXact(fps->serializable_xact_handle);
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c
index 27e4100a6f..749b9dbee1 100644
--- a/src/backend/catalog/pg_enum.c
+++ b/src/backend/catalog/pg_enum.c
@@ -41,10 +41,11 @@ Oid binary_upgrade_next_pg_enum_oid = InvalidOid;
* committed; otherwise, they might get into indexes where we can't clean
* them up, and then if the transaction rolls back we have a broken index.
* (See comments for check_safe_enum_use() in enum.c.) Values created by
- * EnumValuesCreate are *not* blacklisted; we assume those are created during
- * CREATE TYPE, so they can't go away unless the enum type itself does.
+ * EnumValuesCreate are *not* entered into the table; we assume those are
+ * created during CREATE TYPE, so they can't go away unless the enum type
+ * itself does.
*/
-static HTAB *enum_blacklist = NULL;
+static HTAB *uncommitted_enum_table = NULL;
static void RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems);
static int sort_order_cmp(const void *p1, const void *p2);
@@ -181,10 +182,10 @@ EnumValuesDelete(Oid enumTypeOid)
}
/*
- * Initialize the enum blacklist for this transaction.
+ * Initialize the uncommitted enum table for this transaction.
*/
static void
-init_enum_blacklist(void)
+init_uncommitted_enum_table(void)
{
HASHCTL hash_ctl;
@@ -192,10 +193,10 @@ init_enum_blacklist(void)
hash_ctl.keysize = sizeof(Oid);
hash_ctl.entrysize = sizeof(Oid);
hash_ctl.hcxt = TopTransactionContext;
- enum_blacklist = hash_create("Enum value blacklist",
- 32,
- &hash_ctl,
- HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ uncommitted_enum_table = hash_create("Uncommitted enum table",
+ 32,
+ &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}
/*
@@ -491,12 +492,12 @@ restart:
table_close(pg_enum, RowExclusiveLock);
- /* Set up the blacklist hash if not already done in this transaction */
- if (enum_blacklist == NULL)
- init_enum_blacklist();
+ /* Set up the enum table if not already done in this transaction */
+ if (uncommitted_enum_table == NULL)
+ init_uncommitted_enum_table();
- /* Add the new value to the blacklist */
- (void) hash_search(enum_blacklist, &newOid, HASH_ENTER, NULL);
+ /* Add the new value to the table */
+ (void) hash_search(uncommitted_enum_table, &newOid, HASH_ENTER, NULL);
}
@@ -585,19 +586,19 @@ RenameEnumLabel(Oid enumTypeOid,
/*
- * Test if the given enum value is on the blacklist
+ * Test if the given enum value is in the table of blocked enums.
*/
bool
-EnumBlacklisted(Oid enum_id)
+EnumUncommitted(Oid enum_id)
{
bool found;
- /* If we've made no blacklist table, all values are safe */
- if (enum_blacklist == NULL)
+ /* If we've made no uncommitted table, all values are safe */
+ if (uncommitted_enum_table == NULL)
return false;
/* Else, is it in the table? */
- (void) hash_search(enum_blacklist, &enum_id, HASH_FIND, &found);
+ (void) hash_search(uncommitted_enum_table, &enum_id, HASH_FIND, &found);
return found;
}
@@ -609,11 +610,11 @@ void
AtEOXact_Enum(void)
{
/*
- * Reset the blacklist table, as all our enum values are now committed.
+ * Reset the uncommitted table, as all our enum values are now committed.
* The memory will go away automatically when TopTransactionContext is
* freed; it's sufficient to clear our pointer.
*/
- enum_blacklist = NULL;
+ uncommitted_enum_table = NULL;
}
@@ -692,12 +693,12 @@ sort_order_cmp(const void *p1, const void *p2)
}
Size
-EstimateEnumBlacklistSpace(void)
+EstimateUncommittedEnumTableSpace(void)
{
size_t entries;
- if (enum_blacklist)
- entries = hash_get_num_entries(enum_blacklist);
+ if (uncommitted_enum_table)
+ entries = hash_get_num_entries(uncommitted_enum_table);
else
entries = 0;
@@ -706,7 +707,7 @@ EstimateEnumBlacklistSpace(void)
}
void
-SerializeEnumBlacklist(void *space, Size size)
+SerializeUncommittedEnumTable(void *space, Size size)
{
Oid *serialized = (Oid *) space;
@@ -714,15 +715,15 @@ SerializeEnumBlacklist(void *space, Size size)
* Make sure the hash table hasn't changed in size since the caller
* reserved the space.
*/
- Assert(size == EstimateEnumBlacklistSpace());
+ Assert(size == EstimateUncommittedEnumTableSpace());
/* Write out all the values from the hash table, if there is one. */
- if (enum_blacklist)
+ if (uncommitted_enum_table)
{
HASH_SEQ_STATUS status;
Oid *value;
- hash_seq_init(&status, enum_blacklist);
+ hash_seq_init(&status, uncommitted_enum_table);
while ((value = (Oid *) hash_seq_search(&status)))
*serialized++ = *value;
}
@@ -738,11 +739,11 @@ SerializeEnumBlacklist(void *space, Size size)
}
void
-RestoreEnumBlacklist(void *space)
+RestoreUncommittedEnumTable(void *space)
{
Oid *serialized = (Oid *) space;
- Assert(!enum_blacklist);
+ Assert(!uncommitted_enum_table);
/*
* As a special case, if the list is empty then don't even bother to
@@ -753,9 +754,9 @@ RestoreEnumBlacklist(void *space)
return;
/* Read all the values into a new hash table. */
- init_enum_blacklist();
+ init_uncommitted_enum_table();
do
{
- hash_search(enum_blacklist, serialized++, HASH_ENTER, NULL);
+ hash_search(uncommitted_enum_table, serialized++, HASH_ENTER, NULL);
} while (OidIsValid(*serialized));
}
diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c
index 5ead794e34..ed7da7e007 100644
--- a/src/backend/utils/adt/enum.c
+++ b/src/backend/utils/adt/enum.c
@@ -83,12 +83,12 @@ check_safe_enum_use(HeapTuple enumval_tup)
return;
/*
- * Check if the enum value is blacklisted. If not, it's safe, because it
+ * Check if the enum value is uncommitted. If not, it's safe, because it
* was made during CREATE TYPE AS ENUM and can't be shorter-lived than its
* owning type. (This'd also be false for values made by other
* transactions; but the previous tests should have handled all of those.)
*/
- if (!EnumBlacklisted(en->oid))
+ if (!EnumUncommitted(en->oid))
return;
/*
diff --git a/src/include/catalog/pg_enum.h b/src/include/catalog/pg_enum.h
index b28d441ba7..fd90365f45 100644
--- a/src/include/catalog/pg_enum.h
+++ b/src/include/catalog/pg_enum.h
@@ -53,10 +53,10 @@ extern void AddEnumLabel(Oid enumTypeOid, const char *newVal,
bool skipIfExists);
extern void RenameEnumLabel(Oid enumTypeOid,
const char *oldVal, const char *newVal);
-extern bool EnumBlacklisted(Oid enum_id);
-extern Size EstimateEnumBlacklistSpace(void);
-extern void SerializeEnumBlacklist(void *space, Size size);
-extern void RestoreEnumBlacklist(void *space);
+extern bool EnumUncommitted(Oid enum_id);
+extern Size EstimateUncommittedEnumTableSpace(void);
+extern void SerializeUncommittedEnumTable(void *space, Size size);
+extern void RestoreUncommittedEnumTable(void *space);
extern void AtEOXact_Enum(void);
#endif /* PG_ENUM_H */
--
2.20.1
On Wed, Oct 21, 2020 at 11:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:
On Wed, Jun 17, 2020 at 10:32 PM Magnus Hagander <magnus@hagander.net> wrote:
In looking at this I realize we also have exactly one thing referred to as "blacklist" in our codebase, which is the "enum blacklist" (and then a small internal variable in pgindent). AFAICT, it's not actually exposed to userspace anywhere, so we could probably make the attached change to blocklist at no "cost" (the only thing changed is the name of the hash table, and we definitely change things like that in normal releases with no specific thought on backwards compat).
+1
Hmm, can we find a more descriptive name for this mechanism? What
about calling it the "uncommitted enum table"? See attached.
Thanks for picking this one up again!
Agreed, that's a much better choice.
The term itself is a bit of a mouthful, but it does describe what it
is in a much more clear way than what the old term did anyway.
Maybe consider just calling it "uncomitted enums", which would as a
bonus have it not end up talking about uncommittedenumtablespace which
gets hits on searches for tablespace.. Though I'm not sure that's
important.
I'm +1 to the change with or without that adjustment.
As for the code, I note that:
+ /* Set up the enum table if not already done in this transaction */
forgets to say it's *uncomitted* enum table -- which is the important
part of I believe.
And
+ * Test if the given enum value is in the table of blocked enums.
should probably talk about uncommitted rather than blocked?
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
On Wed, Nov 4, 2020 at 4:10 AM Magnus Hagander <magnus@hagander.net> wrote:
On Wed, Oct 21, 2020 at 11:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:
Hmm, can we find a more descriptive name for this mechanism? What
about calling it the "uncommitted enum table"? See attached.Thanks for picking this one up again!
Agreed, that's a much better choice.
The term itself is a bit of a mouthful, but it does describe what it
is in a much more clear way than what the old term did anyway.Maybe consider just calling it "uncomitted enums", which would as a
bonus have it not end up talking about uncommittedenumtablespace which
gets hits on searches for tablespace.. Though I'm not sure that's
important.I'm +1 to the change with or without that adjustment.
Cool. I went with your suggestion.
As for the code, I note that:
+ /* Set up the enum table if not already done in this transaction */forgets to say it's *uncomitted* enum table -- which is the important
part of I believe.
Fixed.
And
+ * Test if the given enum value is in the table of blocked enums.should probably talk about uncommitted rather than blocked?
Fixed.
And pushed.
Magnus Hagander <magnus@hagander.net> writes:
In looking at this I realize we also have exactly one thing referred to as
"blacklist" in our codebase, which is the "enum blacklist" (and then a
small internal variable in pgindent).
Here's a patch that renames the @whitelist and %blacklist variables in
pgindent to @additional and %excluded, and adjusts the comments to
match.
- ilmari
--
"The surreality of the universe tends towards a maximum" -- Skud's Law
"Never formulate a law or axiom that you're not prepared to live with
the consequences of." -- Skud's Meta-Law
Attachments:
0001-Rename-whitelist-blacklist-in-pgindent-to-additional.patchtext/x-diffDownload
From 6525826bdf87ce02bd0a1648f94c7122290907f6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilmari@ilmari.org>
Date: Tue, 5 Jan 2021 00:10:07 +0000
Subject: [PATCH] Rename whitelist/blacklist in pgindent to additional/excluded
---
src/tools/pgindent/pgindent | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index 4124d27dea..feb02067c5 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -54,12 +54,12 @@ $excludes ||= "$code_base/src/tools/pgindent/exclude_file_patterns"
# some names we want to treat like typedefs, e.g. "bool" (which is a macro
# according to <stdbool.h>), and may include some names we don't want
# treated as typedefs, although various headers that some builds include
-# might make them so. For the moment we just hardwire a whitelist of names
-# to add and a blacklist of names to remove; eventually this may need to be
+# might make them so. For the moment we just hardwire a list of names
+# to add and a list of names to exclude; eventually this may need to be
# easier to configure. Note that the typedefs need trailing newlines.
-my @whitelist = ("bool\n");
+my @additional = ("bool\n");
-my %blacklist = map { +"$_\n" => 1 } qw(
+my %excluded = map { +"$_\n" => 1 } qw(
ANY FD_SET U abs allocfunc boolean date digit ilist interval iterator other
pointer printfunc reference string timestamp type wrap
);
@@ -134,11 +134,11 @@ sub load_typedefs
}
}
- # add whitelisted entries
- push(@typedefs, @whitelist);
+ # add additional entries
+ push(@typedefs, @additional);
- # remove blacklisted entries
- @typedefs = grep { !$blacklist{$_} } @typedefs;
+ # remove excluded entries
+ @typedefs = grep { !$excluded{$_} } @typedefs;
# write filtered typedefs
my $filter_typedefs_fh = new File::Temp(TEMPLATE => "pgtypedefXXXXX");
--
2.29.2
On Tue, Jan 5, 2021 at 1:12 PM Dagfinn Ilmari Mannsåker
<ilmari@ilmari.org> wrote:
Magnus Hagander <magnus@hagander.net> writes:
In looking at this I realize we also have exactly one thing referred to as
"blacklist" in our codebase, which is the "enum blacklist" (and then a
small internal variable in pgindent).Here's a patch that renames the @whitelist and %blacklist variables in
pgindent to @additional and %excluded, and adjusts the comments to
match.
Pushed. Thanks!
Thomas Munro <thomas.munro@gmail.com> writes:
On Tue, Jan 5, 2021 at 1:12 PM Dagfinn Ilmari Mannsåker
<ilmari@ilmari.org> wrote:Magnus Hagander <magnus@hagander.net> writes:
In looking at this I realize we also have exactly one thing referred to as
"blacklist" in our codebase, which is the "enum blacklist" (and then a
small internal variable in pgindent).Here's a patch that renames the @whitelist and %blacklist variables in
pgindent to @additional and %excluded, and adjusts the comments to
match.Pushed. Thanks!
Thanks! Just after sending that, I thought to grep for "white\W*list"
as well, and found a few more occurrences that were trivially reworded,
per the attached patch.
- ilmari
--
"A disappointingly low fraction of the human race is,
at any given time, on fire." - Stig Sandbeck Mathisen
Attachments:
0001-Replace-remaining-uses-of-whitelist.patchtext/x-diffDownload
From 43e9c60bac7b1702e5be2362a439f67adc8a5e06 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilmari@ilmari.org>
Date: Tue, 5 Jan 2021 00:20:49 +0000
Subject: [PATCH] Replace remaining uses of "whitelist"
Instead describe the action that the list effects, or just use "list"
where the meaning is obvious from context.
---
contrib/postgres_fdw/postgres_fdw.h | 2 +-
contrib/postgres_fdw/shippable.c | 4 ++--
src/backend/access/hash/hashvalidate.c | 2 +-
src/backend/utils/adt/lockfuncs.c | 2 +-
src/tools/pginclude/README | 4 ++--
5 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/contrib/postgres_fdw/postgres_fdw.h b/contrib/postgres_fdw/postgres_fdw.h
index 277a30f500..19ea27a1bc 100644
--- a/contrib/postgres_fdw/postgres_fdw.h
+++ b/contrib/postgres_fdw/postgres_fdw.h
@@ -77,7 +77,7 @@ typedef struct PgFdwRelationInfo
bool use_remote_estimate;
Cost fdw_startup_cost;
Cost fdw_tuple_cost;
- List *shippable_extensions; /* OIDs of whitelisted extensions */
+ List *shippable_extensions; /* OIDs of shippable extensions */
/* Cached catalog information. */
ForeignTable *table;
diff --git a/contrib/postgres_fdw/shippable.c b/contrib/postgres_fdw/shippable.c
index c43e7e5ec5..b27f82e015 100644
--- a/contrib/postgres_fdw/shippable.c
+++ b/contrib/postgres_fdw/shippable.c
@@ -7,7 +7,7 @@
* data types are shippable to a remote server for execution --- that is,
* do they exist and have the same behavior remotely as they do locally?
* Built-in objects are generally considered shippable. Other objects can
- * be shipped if they are white-listed by the user.
+ * be shipped if they are declared as such by the user.
*
* Note: there are additional filter rules that prevent shipping mutable
* functions or functions using nonportable collations. Those considerations
@@ -110,7 +110,7 @@ InitializeShippableCache(void)
*
* Right now "shippability" is exclusively a function of whether the object
* belongs to an extension declared by the user. In the future we could
- * additionally have a whitelist of functions/operators declared one at a time.
+ * additionally have a list of functions/operators declared one at a time.
*/
static bool
lookup_shippable(Oid objectId, Oid classId, PgFdwRelationInfo *fpinfo)
diff --git a/src/backend/access/hash/hashvalidate.c b/src/backend/access/hash/hashvalidate.c
index 8462540017..1e343df0af 100644
--- a/src/backend/access/hash/hashvalidate.c
+++ b/src/backend/access/hash/hashvalidate.c
@@ -312,7 +312,7 @@ check_hash_func_signature(Oid funcid, int16 amprocnum, Oid argtype)
* that are different from but physically compatible with the opclass
* datatype. In some of these cases, even a "binary coercible" check
* fails because there's no relevant cast. For the moment, fix it by
- * having a whitelist of allowed cases. Test the specific function
+ * having a list of allowed cases. Test the specific function
* identity, not just its input type, because hashvarlena() takes
* INTERNAL and allowing any such function seems too scary.
*/
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index 9f2c4946c9..0db8be6c91 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -644,7 +644,7 @@ pg_isolation_test_session_is_blocked(PG_FUNCTION_ARGS)
/*
* Check if blocked_pid is waiting for a safe snapshot. We could in
* theory check the resulting array of blocker PIDs against the
- * interesting PIDs whitelist, but since there is no danger of autovacuum
+ * interesting PIDs list, but since there is no danger of autovacuum
* blocking GetSafeSnapshot there seems to be no point in expending cycles
* on allocating a buffer and searching for overlap; so it's presently
* sufficient for the isolation tester's purposes to use a single element
diff --git a/src/tools/pginclude/README b/src/tools/pginclude/README
index a067c7f472..49eb4b6907 100644
--- a/src/tools/pginclude/README
+++ b/src/tools/pginclude/README
@@ -64,7 +64,7 @@ with no prerequisite headers other than postgres.h (or postgres_fe.h
or c.h, as appropriate).
A small number of header files are exempted from this requirement,
-and are whitelisted in the headerscheck script.
+and are skipped by the headerscheck script.
The easy way to run the script is to say "make -s headerscheck" in
the top-level build directory after completing a build. You should
@@ -86,7 +86,7 @@ the project's coding language is C, some people write extensions in C++,
so it's helpful for include files to be C++-clean.
A small number of header files are exempted from this requirement,
-and are whitelisted in the cpluspluscheck script.
+and are skipped by the cpluspluscheck script.
The easy way to run the script is to say "make -s cpluspluscheck" in
the top-level build directory after completing a build. You should
--
2.29.2