Use "WAL segment" instead of "log segment" consistently in user-facing messages

Started by Bharath Rupireddy · almost 4 years ago · 19 messages
#1Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
1 attachment(s)

Hi,

It looks like we use "log segment" in various user-facing messages.
The term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages [1].
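
For example, one of the messages in xlogreader.c would change like this
(the attached patch has the full set of changes):

/* before */
report_invalid_record(state,
                      "invalid magic number %04X in log segment %s, offset %u",
                      hdr->xlp_magic, fname, offset);

/* after */
report_invalid_record(state,
                      "invalid magic number %04X in WAL segment %s, offset %u",
                      hdr->xlp_magic, fname, offset);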

Here's a small patch attempting to consistently use the "WAL segment".

Thoughts?

[1]:
pg_log_error("could not fetch WAL segment size: got %d rows and %d
fields, expected %d rows and %d or more fields",
pg_log_error("WAL segment size could not be parsed");
pg_log_error(ngettext("WAL segment size must be a power of two between
1 MB and 1 GB, but the remote server reported a value of %d byte",
printf(_("WARNING: invalid WAL segment size\n"));
printf(_("Bytes per WAL segment: %u\n"),
fatal_error(ngettext("WAL segment size must be a power of two between
1 MB and 1 GB, but the WAL file \"%s\" header specifies %d byte",
errmsg("requested WAL segment %s has already been removed",
elog(DEBUG2, "removed temporary WAL segment \"%s\"", path);

Regards,
Bharath Rupireddy.

Attachments:

v1-0001-Use-WAL-segment-instead-of-log-segment.patch
From b785f599dda14883e5dbc5669e2fc761485b68f4 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Mon, 28 Feb 2022 14:50:00 +0000
Subject: [PATCH v1] Use "WAL segment" instead of "log segment"

It looks like we use "log segment" in various user-facing messages.
The term "log" can mean server logs as well. The "WAL segment"
suits well here and it is consistently used across the other
user-facing messages.
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  2 +-
 7 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 35029cf97d..a79077c0c8 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -843,7 +843,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -857,7 +857,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -898,7 +898,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -917,7 +917,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -942,7 +942,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index f9f212680b..feca14d625 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -2988,7 +2988,7 @@ ReadRecord(XLogReaderState *xlogreader, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3179,13 +3179,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 54d5f20734..86cade75d3 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -985,14 +985,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index ceaff097b9..94b3f0d016 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index 1eb4509fca..a8e8d6f67f 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -837,7 +837,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 41b8f69b8c..67a964ace4 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -346,7 +346,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 2340dc247b..eda661898f 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -765,7 +765,7 @@ usage(void)
 	printf(_("  -e, --end=RECPTR       stop reading at WAL location RECPTR\n"));
 	printf(_("  -f, --follow           keep retrying after reaching end of WAL\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
-- 
2.25.1

#2Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Bharath Rupireddy (#1)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

At Mon, 28 Feb 2022 21:03:07 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in

Hi,

It looks like we use "log segment" in various user-facing messages.
The term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages [1].

Here's a small patch attempting to consistently use the "WAL segment".

Thoughts?

I tend to agree with this. I also see "log record(s)" (without being
prefixed by "write-ahead") in many places, especially in the
documentation. I'm not sure how we should treat "WAL log", though.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#3Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Kyotaro Horiguchi (#2)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Tue, Mar 1, 2022 at 6:50 AM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:

At Mon, 28 Feb 2022 21:03:07 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in

Hi,

It looks like we use "log segment" in various user-facing messages.
The term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages [1].

Here's a small patch attempting to consistently use the "WAL segment".

Thoughts?

I tend to agree with this.

Thanks for taking a look at it. Here's the CF entry -
https://commitfest.postgresql.org/38/3584/

I also see "log record(s)" (without being
prefixed by "write-ahead") in many places, especially in the
documentation. I'm not sure how we should treat "WAL log", though.

Yes, but the docs have a glossary term for "Log record" [1]. FWIW, I'm
attaching the docs change as the v2-0002 patch. I also found another place
where "log records" is being used, in pg_waldump.c; I changed that in the
attached v2-0001 patch.
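
For reference, the additional pg_waldump.c hunk (from the attached
v2-0001 patch) is just the --timeline usage string:

-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));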

Please review the v2 patch set.

[1]:
<glossentry id="glossary-log-record">
 <glossterm>Log record</glossterm>
 <glossdef>
  <para>
   Archaic term for a <glossterm linkend="glossary-wal-record">WAL
   record</glossterm>.
  </para>
 </glossdef>
</glossentry>

Regards,
Bharath Rupireddy.

Attachments:

v2-0001-Use-WAL-segment-instead-of-log-segment.patch
From 6263c638b3b50b132cb16dd886ee4ab34bf0e9a5 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 2 Mar 2022 05:25:00 +0000
Subject: [PATCH v2] Use "WAL segment" instead of "log segment"

It looks like we use "log segment" in various user-facing messages.
The term "log" can mean server logs as well. The "WAL segment"
suits well here and it is consistently used across the other
user-facing messages.
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 35029cf97d..a79077c0c8 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -843,7 +843,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -857,7 +857,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -898,7 +898,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -917,7 +917,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -942,7 +942,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index f9f212680b..feca14d625 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -2988,7 +2988,7 @@ ReadRecord(XLogReaderState *xlogreader, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3179,13 +3179,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 54d5f20734..86cade75d3 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -985,14 +985,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index ceaff097b9..94b3f0d016 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index 1eb4509fca..a8e8d6f67f 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -837,7 +837,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 41b8f69b8c..67a964ace4 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -346,7 +346,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 2340dc247b..d86e1bbeff 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -765,14 +765,14 @@ usage(void)
 	printf(_("  -e, --end=RECPTR       stop reading at WAL location RECPTR\n"));
 	printf(_("  -f, --follow           keep retrying after reaching end of WAL\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
 	printf(_("  -r, --rmgr=RMGR        only show records generated by resource manager RMGR;\n"
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -x, --xid=XID          only show records with transaction ID XID\n"));
-- 
2.25.1

v2-0002-Replace-log-record-with-WAL-record-in-docs.patch
From 0b07cced29ef034107d81c3d4e4560f5a4819c98 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 2 Mar 2022 06:08:08 +0000
Subject: [PATCH v2] Replace log record with WAL record in docs

---
 doc/src/sgml/backup.sgml         | 4 ++--
 doc/src/sgml/ref/pg_waldump.sgml | 4 ++--
 doc/src/sgml/wal.sgml            | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 0d69851bb1..dd8640b092 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -482,8 +482,8 @@ tar -cf backup.tar /usr/local/pgsql/data
   <para>
    At all times, <productname>PostgreSQL</productname> maintains a
    <firstterm>write ahead log</firstterm> (WAL) in the <filename>pg_wal/</filename>
-   subdirectory of the cluster's data directory. The log records
-   every change made to the database's data files.  This log exists
+   subdirectory of the cluster's data directory. The WAL records
+   capture every change made to the database's data files.  This log exists
    primarily for crash-safety purposes: if the system crashes, the
    database can be restored to consistency by <quote>replaying</quote> the
    log entries made since the last checkpoint.  However, the existence
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 5735a161ce..8136244502 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -156,7 +156,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -166,7 +166,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 2bb27a8468..2677996f2a 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -296,12 +296,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -838,7 +838,7 @@
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
-- 
2.25.1

#4Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Bharath Rupireddy (#3)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Wed, Mar 2, 2022 at 11:41 AM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:

Please review the v2 patch set.

Had to rebase. Attaching v3 patch set.

Regards,
Bharath Rupireddy.

Attachments:

v3-0001-Use-WAL-segment-instead-of-log-segment.patch
From 54b1370b2092fb12e6ea4832387f4703ad1915b6 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Sat, 26 Mar 2022 05:52:18 +0000
Subject: [PATCH v3] Use "WAL segment" instead of "log segment"

It looks like we use "log segment" in various user-facing messages.
The term "log" can mean server logs as well. The "WAL segment"
suits well here and it is consistently used across the other
user-facing messages.
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index e437c42992..d37bae47f8 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1207,7 +1207,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -1221,7 +1221,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1262,7 +1262,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1281,7 +1281,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -1306,7 +1306,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 8b22c4e634..f3909213fe 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -2998,7 +2998,7 @@ ReadRecord(XLogReaderState *xlogreader, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3189,13 +3189,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 8c1b8216be..e49f534c5c 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -1142,14 +1142,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index ceaff097b9..94b3f0d016 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index 1eb4509fca..a8e8d6f67f 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -837,7 +837,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 41b8f69b8c..67a964ace4 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -346,7 +346,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 9ffe9e55bd..7a57f64722 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -831,7 +831,7 @@ usage(void)
 	printf(_("  -F, --fork=FORK        only show records that modify blocks in fork FORK;\n"
 			 "                         valid names are main, fsm, vm, init\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
@@ -839,7 +839,7 @@ usage(void)
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -R, --relation=T/D/R   only show records that modify blocks in relation T/D/R\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -w, --fullpage         only show records with a full page write\n"));
-- 
2.25.1

v3-0002-Replace-log-record-with-WAL-record-in-docs.patch
From 0b07cced29ef034107d81c3d4e4560f5a4819c98 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 2 Mar 2022 06:08:08 +0000
Subject: [PATCH v3] Replace log record with WAL record in docs

---
 doc/src/sgml/backup.sgml         | 4 ++--
 doc/src/sgml/ref/pg_waldump.sgml | 4 ++--
 doc/src/sgml/wal.sgml            | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 0d69851bb1..dd8640b092 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -482,8 +482,8 @@ tar -cf backup.tar /usr/local/pgsql/data
   <para>
    At all times, <productname>PostgreSQL</productname> maintains a
    <firstterm>write ahead log</firstterm> (WAL) in the <filename>pg_wal/</filename>
-   subdirectory of the cluster's data directory. The log records
-   every change made to the database's data files.  This log exists
+   subdirectory of the cluster's data directory. The WAL records
+   capture every change made to the database's data files.  This log exists
    primarily for crash-safety purposes: if the system crashes, the
    database can be restored to consistency by <quote>replaying</quote> the
    log entries made since the last checkpoint.  However, the existence
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 5735a161ce..8136244502 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -156,7 +156,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -166,7 +166,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 2bb27a8468..2677996f2a 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -296,12 +296,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -838,7 +838,7 @@
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
-- 
2.25.1

#5Nathan Bossart
nathandbossart@gmail.com
In reply to: Bharath Rupireddy (#4)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages
    At all times, <productname>PostgreSQL</productname> maintains a
    <firstterm>write ahead log</firstterm> (WAL) in the <filename>pg_wal/</filename>
-   subdirectory of the cluster's data directory. The log records
-   every change made to the database's data files.  This log exists
+   subdirectory of the cluster's data directory. The WAL records
+   capture every change made to the database's data files.  This log exists

I don't think this change really adds anything. The preceding sentence
makes it clear that we are discussing the write-ahead log, and IMO the
change in phrasing ("the log records every change" is changed to "the
records capture every change") subtly changes the meaning of the sentence.

The rest looks good to me.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#6Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Nathan Bossart (#5)
1 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

At Thu, 31 Mar 2022 08:45:56 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in

At all times, <productname>PostgreSQL</productname> maintains a
<firstterm>write ahead log</firstterm> (WAL) in the <filename>pg_wal/</filename>
-   subdirectory of the cluster's data directory. The log records
-   every change made to the database's data files.  This log exists
+   subdirectory of the cluster's data directory. The WAL records
+   capture every change made to the database's data files.  This log exists

I don't think this change really adds anything. The preceding sentence
makes it clear that we are discussing the write-ahead log, and IMO the
change in phrasing ("the log records every change" is changed to "the
records capture every change") subtly changes the meaning of the sentence.

The rest looks good to me.

+1. In the original sentence, "log records" is not a compound noun.

The original sentence parses as "S(The log) V(records) O(every change that
is made to .. files)". The proposed change turns it into "S(The WAL
records) V(capture) O(every .. files)". In that sense, the original one
seems rather correct to me, since "capture" seems to have the implication
of "write after log..".

I looked through the document and found other uses of "log
record|segment". What do you think about the attached?

There are some uncertain points in the change.

      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.

The "logs" means acutally "WAL segment (files)" but the concept of
"segment" is out of focus in the context. So just "file" is used
there. The same change is applied on dezon of places.

-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> are met,

This might be better as "WAL files" instead of just "WAL".

-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> is stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of

I'm not sure which is better: using "WAL" as a collective noun, or "WAL
files" for the concrete objects.

-   The aim of <acronym>WAL</acronym> is to ensure that the log is
+   The aim of <acronym>WAL</acronym> is to ensure that the WAL record is
    written before database records are altered, but this can be subverted by

This is not a mechanical change, but I think it is correct.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

Attachments:

addition-doc-fix-for-wal-log.txt
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index dd8640b092..941042f646 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1246,7 +1246,7 @@ SELECT pg_stop_backup();
      require that you have enough free space on your system to hold two
      copies of your existing database. If you do not have enough space,
      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.
     </para>
    </listitem>
@@ -1324,8 +1324,8 @@ SELECT pg_stop_backup();
     which tells <productname>PostgreSQL</productname> how to retrieve archived
     WAL file segments.  Like the <varname>archive_command</varname>, this is
     a shell command string.  It can contain <literal>%f</literal>, which is
-    replaced by the name of the desired log file, and <literal>%p</literal>,
-    which is replaced by the path name to copy the log file to.
+    replaced by the name of the desired WAL file, and <literal>%p</literal>,
+    which is replaced by the path name to copy the WAL file to.
     (The path name is relative to the current working directory,
     i.e., the cluster's data directory.)
     Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
@@ -1651,9 +1651,9 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
      <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
      commands are WAL-logged with the literal absolute path, and will
      therefore be replayed as tablespace creations with the same
-     absolute path.  This might be undesirable if the log is being
+     absolute path.  This might be undesirable if the WAL is being
      replayed on a different machine.  It can be dangerous even if the
-     log is being replayed on the same machine, but into a new data
+     WAL is being replayed on the same machine, but into a new data
      directory: the replay will still overwrite the contents of the
      original tablespace.  To avoid potential gotchas of this sort,
      the best practice is to take a new base backup after creating or
@@ -1670,11 +1670,11 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
     we might need to fix partially-written disk pages.  Depending on
     your system hardware and software, the risk of partial writes might
     be small enough to ignore, in which case you can significantly
-    reduce the total volume of archived logs by turning off page
+    reduce the total volume of archived WAL files by turning off page
     snapshots using the <xref linkend="guc-full-page-writes"/>
     parameter.  (Read the notes and warnings in <xref linkend="wal"/>
     before you do so.)  Turning off page snapshots does not prevent
-    use of the logs for PITR operations.  An area for future
+    use of the WAL for PITR operations.  An area for future
     development is to compress archived WAL data by removing
     unnecessary page copies even when <varname>full_page_writes</varname> is
     on.  In the meantime, administrators might wish to reduce the number
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 96f9b3dd70..de0bed2b10 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -53,7 +53,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">startseg</replaceable></term>
       <listitem>
        <para>
-        Start reading at the specified log segment file.  This implicitly determines
+        Start reading at the specified WAL segment file.  This implicitly determines
         the path in which files will be searched for, and the timeline to use.
        </para>
       </listitem>
@@ -63,7 +63,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">endseg</replaceable></term>
       <listitem>
        <para>
-        Stop after reading the specified log segment file.
+        Stop after reading the specified WAL segment file.
        </para>
       </listitem>
      </varlistentry>
@@ -141,7 +141,7 @@ PostgreSQL documentation
       <term><option>--path=<replaceable>path</replaceable></option></term>
       <listitem>
        <para>
-        Specifies a directory to search for log segment files or a
+        Specifies a directory to search for WAL segment files or a
         directory with a <literal>pg_wal</literal> subdirectory that
         contains such files.  The default is to search in the current
         directory, the <literal>pg_wal</literal> subdirectory of the
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 2677996f2a..69dd74f4ab 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -322,15 +322,15 @@
 
    <para>
     Using <acronym>WAL</acronym> results in a
-    significantly reduced number of disk writes, because only the log
+    significantly reduced number of disk writes, because only the WAL
     file needs to be flushed to disk to guarantee that a transaction is
     committed, rather than every data file changed by the transaction.
-    The log file is written sequentially,
-    and so the cost of syncing the log is much less than the cost of
+    The WAL file is written sequentially,
+    and so the cost of syncing the WAL is much less than the cost of
     flushing the data pages.  This is especially true for servers
     handling many small transactions touching different parts of the data
     store.  Furthermore, when the server is processing many small concurrent
-    transactions, one <function>fsync</function> of the log file may
+    transactions, one <function>fsync</function> of the WAL file may
     suffice to commit many transactions.
    </para>
 
@@ -340,10 +340,10 @@
     linkend="continuous-archiving"/>.  By archiving the WAL data we can support
     reverting to any time instant covered by the available WAL data:
     we simply install a prior physical backup of the database, and
-    replay the WAL log just as far as the desired time.  What's more,
+    replay the WAL just as far as the desired time.  What's more,
     the physical backup doesn't have to be an instantaneous snapshot
     of the database state &mdash; if it is made over some period of time,
-    then replaying the WAL log for that period will fix any internal
+    then replaying the WAL for that period will fix any internal
     inconsistencies.
    </para>
   </sect1>
@@ -496,15 +496,15 @@
    that the heap and index data files have been updated with all
    information written before that checkpoint.  At checkpoint time, all
    dirty data pages are flushed to disk and a special checkpoint record is
-   written to the log file.  (The change records were previously flushed
+   written to the WAL file.  (The change records were previously flushed
    to the <acronym>WAL</acronym> files.)
    In the event of a crash, the crash recovery procedure looks at the latest
-   checkpoint record to determine the point in the log (known as the redo
+   checkpoint record to determine the point in the WAL (known as the redo
    record) from which it should start the REDO operation.  Any changes made to
    data files before that point are guaranteed to be already on disk.
-   Hence, after a checkpoint, log segments preceding the one containing
+   Hence, after a checkpoint, WAL segments preceding the one containing
    the redo record are no longer needed and can be recycled or removed. (When
-   <acronym>WAL</acronym> archiving is being done, the log segments must be
+   <acronym>WAL</acronym> archiving is being done, the WAL segments must be
    archived before being recycled or removed.)
   </para>
 
@@ -543,7 +543,7 @@
    another factor to consider. To ensure data page consistency,
    the first modification of a data page after each checkpoint results in
    logging the entire page content. In that case,
-   a smaller checkpoint interval increases the volume of output to the WAL log,
+   a smaller checkpoint interval increases the volume of output to the WAL,
    partially negating the goal of using a smaller interval,
    and in any case causing more disk I/O.
   </para>
@@ -613,10 +613,10 @@
   <para>
    The number of WAL segment files in <filename>pg_wal</filename> directory depends on
    <varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
-   the amount of WAL generated in previous checkpoint cycles. When old log
+   the amount of WAL generated in previous checkpoint cycles. When old WAL
    segment files are no longer needed, they are removed or recycled (that is,
    renamed to become future segments in the numbered sequence). If, due to a
-   short-term peak of log output rate, <varname>max_wal_size</varname> is
+   short-term peak of WAL output rate, <varname>max_wal_size</varname> is
    exceeded, the unneeded segment files will be removed until the system
    gets back under this limit. Below that limit, the system recycles enough
    WAL files to cover the estimated need until the next checkpoint, and
@@ -649,7 +649,7 @@
    which are similar to checkpoints in normal operation: the server forces
    all its state to disk, updates the <filename>pg_control</filename> file to
    indicate that the already-processed WAL data need not be scanned again,
-   and then recycles any old log segment files in the <filename>pg_wal</filename>
+   and then recycles any old WAL segment files in the <filename>pg_wal</filename>
    directory.
    Restartpoints can't be performed more frequently than checkpoints on the
    primary because restartpoints can only be performed at checkpoint records.
@@ -675,12 +675,12 @@
    insertion) at a time when an exclusive lock is held on affected
    data pages, so the operation needs to be as fast as possible.  What
    is worse, writing <acronym>WAL</acronym> buffers might also force the
-   creation of a new log segment, which takes even more
+   creation of a new WAL segment, which takes even more
    time. Normally, <acronym>WAL</acronym> buffers should be written
    and flushed by an <function>XLogFlush</function> request, which is
    made, for the most part, at transaction commit time to ensure that
    transaction records are flushed to permanent storage. On systems
-   with high log output, <function>XLogFlush</function> requests might
+   with high WAL output, <function>XLogFlush</function> requests might
    not occur often enough to prevent <function>XLogInsertRecord</function>
    from having to do writes.  On such systems
    one should increase the number of <acronym>WAL</acronym> buffers by
@@ -723,7 +723,7 @@
    <varname>commit_delay</varname>, so this value is recommended as the
    starting point to use when optimizing for a particular workload.  While
    tuning <varname>commit_delay</varname> is particularly useful when the
-   WAL log is stored on high-latency rotating disks, benefits can be
+   WAL is stored on high-latency rotating disks, benefits can be
    significant even on storage media with very fast sync times, such as
    solid-state drives or RAID arrays with a battery-backed write cache;
    but this should definitely be tested against a representative workload.
@@ -815,16 +815,16 @@
   <para>
    <acronym>WAL</acronym> is automatically enabled; no action is
    required from the administrator except ensuring that the
-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> are met,
    and that any necessary tuning is done (see <xref
    linkend="wal-configuration"/>).
   </para>
 
   <para>
    <acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
-   logs as each new record is written. The insert position is described by
+   as each new record is written. The insert position is described by
    a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
-   the logs, increasing monotonically with each new record.
+   the WAL, increasing monotonically with each new record.
    <acronym>LSN</acronym> values are returned as the datatype
    <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
    compared to calculate the volume of <acronym>WAL</acronym> data that
@@ -833,7 +833,7 @@
   </para>
 
   <para>
-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> is stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
@@ -848,7 +848,7 @@
   </para>
 
   <para>
-   It is advantageous if the log is located on a different disk from the
+   It is advantageous if the WAL is located on a different disk from the
    main database files.  This can be achieved by moving the
    <filename>pg_wal</filename> directory to another location (while the server
    is shut down, of course) and creating a symbolic link from the
@@ -856,7 +856,7 @@
   </para>
 
   <para>
-   The aim of <acronym>WAL</acronym> is to ensure that the log is
+   The aim of <acronym>WAL</acronym> is to ensure that the WAL record is
    written before database records are altered, but this can be subverted by
    disk drives<indexterm><primary>disk drive</primary></indexterm> that falsely report a
    successful write to the kernel,
@@ -864,19 +864,19 @@
    on the disk.  A power failure in such a situation might lead to
    irrecoverable data corruption.  Administrators should try to ensure
    that disks holding <productname>PostgreSQL</productname>'s
-   <acronym>WAL</acronym> log files do not make such false reports.
+   <acronym>WAL</acronym> files do not make such false reports.
    (See <xref linkend="wal-reliability"/>.)
   </para>
 
   <para>
-   After a checkpoint has been made and the log flushed, the
+   After a checkpoint has been made and the WAL flushed, the
    checkpoint's position is saved in the file
    <filename>pg_control</filename>. Therefore, at the start of recovery,
    the server first reads <filename>pg_control</filename> and
    then the checkpoint record; then it performs the REDO operation by
-   scanning forward from the log location indicated in the checkpoint
+   scanning forward from the WAL location indicated in the checkpoint
    record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL record on the first page modification after a checkpoint (assuming
    <xref linkend="guc-full-page-writes"/> is not disabled), all pages
    changed since the checkpoint will be restored to a consistent
    state.
@@ -884,7 +884,7 @@
 
   <para>
    To deal with the case where <filename>pg_control</filename> is
-   corrupt, we should support the possibility of scanning existing log
+   corrupt, we should support the possibility of scanning existing WAL
    segments in reverse order &mdash; newest to oldest &mdash; in order to find the
    latest checkpoint.  This has not been implemented yet.
    <filename>pg_control</filename> is small enough (less than one disk page)
#7Nathan Bossart
nathandbossart@gmail.com
In reply to: Kyotaro Horiguchi (#6)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Fri, Apr 01, 2022 at 10:31:10AM +0900, Kyotaro Horiguchi wrote:

you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
were not archived before the system went down.

The "logs" means acutally "WAL segment (files)" but the concept of
"segment" is out of focus in the context. So just "file" is used
there. The same change is applied on dezon of places.

This change seems reasonable to me.

-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> are met,

This might be better as "WAL files" instead of just "WAL".

+1 for "WAL files"

-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> is stored in the directory
<filename>pg_wal</filename> under the data directory, as a set of

I'm not sure which is better: using "WAL" as a collective noun, or "WAL
files" as the concrete objects.

My vote is for "WAL files" because it was previously "WAL logs."

-   The aim of <acronym>WAL</acronym> is to ensure that the log is
+   The aim of <acronym>WAL</acronym> is to ensure that the WAL record is
written before database records are altered, but this can be subverted by

This is not a mechanical change. But I think this is correct.

IMO the original wording is fine. I think it is sufficiently clear that
"log" refers to "write-ahead log," and this sentence seems intended to
convey the basic rule of "log before data." However, the rest of the
sentence is a little weird. It's basically saying "the aim of the log is
to ensure that the log is written..." Isn't the aim of the log to record
the database activity? Perhaps we should rewrite it to something like the
following:

A basic rule of WAL is that the log must be written before the database
files are altered, but this can be...

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#8Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#7)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

It's been a few weeks, so I'm marking the commitfest entry as
waiting-on-author.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#9Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Nathan Bossart (#8)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Tue, Apr 26, 2022 at 1:24 AM Nathan Bossart <nathandbossart@gmail.com> wrote:

It's been a few weeks, so I'm marking the commitfest entry as
waiting-on-author.

Thanks. I'm attaching the updated v4 patches (which also subsume Kyotaro
San's patch at [1]). Please review them further.

[1]: /messages/by-id/20220401.103110.1103213854487561781.horikyota.ntt@gmail.com

Regards,
Bharath Rupireddy.

Attachments:

v4-0001-Use-WAL-segment-instead-of-log-segment.patchapplication/octet-stream; name=v4-0001-Use-WAL-segment-instead-of-log-segment.patchDownload
From 68d39c9b087e25fc39312d72cca0ed31c1d9fad2 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Mon, 18 Jul 2022 10:27:20 +0000
Subject: [PATCH v4] Use "WAL segment" instead of "log segment"

We are using "log segment" in various user-facing messages, the
term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages.

Author: Bharath Rupireddy
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f3dc4b7797..58f1a32b00 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1209,7 +1209,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -1223,7 +1223,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1264,7 +1264,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1283,7 +1283,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -1308,7 +1308,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 5d6f1b5e46..306a9f40e9 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3018,7 +3018,7 @@ ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3223,13 +3223,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 0cda22597f..9e3a000768 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -1049,14 +1049,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 3d37c1fe62..3767466ef3 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index d4772a2965..7adf79eeed 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -788,7 +788,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 07de918358..678e8ebf6b 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -350,7 +350,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 6528113628..4eebeadc8c 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -667,7 +667,7 @@ usage(void)
 	printf(_("  -F, --fork=FORK        only show records that modify blocks in fork FORK;\n"
 			 "                         valid names are main, fsm, vm, init\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
@@ -675,7 +675,7 @@ usage(void)
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -R, --relation=T/D/R   only show records that modify blocks in relation T/D/R\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -w, --fullpage         only show records with a full page write\n"));
-- 
2.25.1

v4-0002-Replace-log-record-with-WAL-record-in-docs.patchapplication/octet-stream; name=v4-0002-Replace-log-record-with-WAL-record-in-docs.patchDownload
From 654984e6ee83de08a800ac4f49a67b9c2d13fe27 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Mon, 18 Jul 2022 10:48:52 +0000
Subject: [PATCH v4] Replace log record with WAL record in docs

Authors: Kyotaro Horiguchi, Bharath Rupireddy
---
 doc/src/sgml/backup.sgml         | 14 ++++----
 doc/src/sgml/ref/pg_waldump.sgml | 10 +++---
 doc/src/sgml/wal.sgml            | 60 ++++++++++++++++----------------
 3 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 73a774d3d7..cc5ae59ac2 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1095,7 +1095,7 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
      require that you have enough free space on your system to hold two
      copies of your existing database. If you do not have enough space,
      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.
     </para>
    </listitem>
@@ -1173,8 +1173,8 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
     which tells <productname>PostgreSQL</productname> how to retrieve archived
     WAL file segments.  Like the <varname>archive_command</varname>, this is
     a shell command string.  It can contain <literal>%f</literal>, which is
-    replaced by the name of the desired log file, and <literal>%p</literal>,
-    which is replaced by the path name to copy the log file to.
+    replaced by the name of the desired WAL file, and <literal>%p</literal>,
+    which is replaced by the path name to copy the WAL file to.
     (The path name is relative to the current working directory,
     i.e., the cluster's data directory.)
     Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
@@ -1462,9 +1462,9 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
      <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
      commands are WAL-logged with the literal absolute path, and will
      therefore be replayed as tablespace creations with the same
-     absolute path.  This might be undesirable if the log is being
+     absolute path.  This might be undesirable if the WAL is being
      replayed on a different machine.  It can be dangerous even if the
-     log is being replayed on the same machine, but into a new data
+     WAL is being replayed on the same machine, but into a new data
      directory: the replay will still overwrite the contents of the
      original tablespace.  To avoid potential gotchas of this sort,
      the best practice is to take a new base backup after creating or
@@ -1481,11 +1481,11 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
     we might need to fix partially-written disk pages.  Depending on
     your system hardware and software, the risk of partial writes might
     be small enough to ignore, in which case you can significantly
-    reduce the total volume of archived logs by turning off page
+    reduce the total volume of archived WAL files by turning off page
     snapshots using the <xref linkend="guc-full-page-writes"/>
     parameter.  (Read the notes and warnings in <xref linkend="wal"/>
     before you do so.)  Turning off page snapshots does not prevent
-    use of the logs for PITR operations.  An area for future
+    use of the WAL for PITR operations.  An area for future
     development is to compress archived WAL data by removing
     unnecessary page copies even when <varname>full_page_writes</varname> is
     on.  In the meantime, administrators might wish to reduce the number
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 57746d9421..2e2166bb6f 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -53,7 +53,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">startseg</replaceable></term>
       <listitem>
        <para>
-        Start reading at the specified log segment file.  This implicitly determines
+        Start reading at the specified WAL segment file.  This implicitly determines
         the path in which files will be searched for, and the timeline to use.
        </para>
       </listitem>
@@ -63,7 +63,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">endseg</replaceable></term>
       <listitem>
        <para>
-        Stop after reading the specified log segment file.
+        Stop after reading the specified WAL segment file.
        </para>
       </listitem>
      </varlistentry>
@@ -141,7 +141,7 @@ PostgreSQL documentation
       <term><option>--path=<replaceable>path</replaceable></option></term>
       <listitem>
        <para>
-        Specifies a directory to search for log segment files or a
+        Specifies a directory to search for WAL segment files or a
         directory with a <literal>pg_wal</literal> subdirectory that
         contains such files.  The default is to search in the current
         directory, the <literal>pg_wal</literal> subdirectory of the
@@ -203,7 +203,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -213,7 +213,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 4b6ef283c1..e8bd7ffecf 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -296,12 +296,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -322,15 +322,15 @@
 
    <para>
     Using <acronym>WAL</acronym> results in a
-    significantly reduced number of disk writes, because only the log
+    significantly reduced number of disk writes, because only the WAL
     file needs to be flushed to disk to guarantee that a transaction is
     committed, rather than every data file changed by the transaction.
-    The log file is written sequentially,
-    and so the cost of syncing the log is much less than the cost of
+    The WAL file is written sequentially,
+    and so the cost of syncing the WAL is much less than the cost of
     flushing the data pages.  This is especially true for servers
     handling many small transactions touching different parts of the data
     store.  Furthermore, when the server is processing many small concurrent
-    transactions, one <function>fsync</function> of the log file may
+    transactions, one <function>fsync</function> of the WAL file may
     suffice to commit many transactions.
    </para>
 
@@ -340,10 +340,10 @@
     linkend="continuous-archiving"/>.  By archiving the WAL data we can support
     reverting to any time instant covered by the available WAL data:
     we simply install a prior physical backup of the database, and
-    replay the WAL log just as far as the desired time.  What's more,
+    replay the WAL just as far as the desired time.  What's more,
     the physical backup doesn't have to be an instantaneous snapshot
     of the database state &mdash; if it is made over some period of time,
-    then replaying the WAL log for that period will fix any internal
+    then replaying the WAL for that period will fix any internal
     inconsistencies.
    </para>
   </sect1>
@@ -496,15 +496,15 @@
    that the heap and index data files have been updated with all
    information written before that checkpoint.  At checkpoint time, all
    dirty data pages are flushed to disk and a special checkpoint record is
-   written to the log file.  (The change records were previously flushed
+   written to the WAL file.  (The change records were previously flushed
    to the <acronym>WAL</acronym> files.)
    In the event of a crash, the crash recovery procedure looks at the latest
-   checkpoint record to determine the point in the log (known as the redo
+   checkpoint record to determine the point in the WAL (known as the redo
    record) from which it should start the REDO operation.  Any changes made to
    data files before that point are guaranteed to be already on disk.
-   Hence, after a checkpoint, log segments preceding the one containing
+   Hence, after a checkpoint, WAL segments preceding the one containing
    the redo record are no longer needed and can be recycled or removed. (When
-   <acronym>WAL</acronym> archiving is being done, the log segments must be
+   <acronym>WAL</acronym> archiving is being done, the WAL segments must be
    archived before being recycled or removed.)
   </para>
 
@@ -543,7 +543,7 @@
    another factor to consider. To ensure data page consistency,
    the first modification of a data page after each checkpoint results in
    logging the entire page content. In that case,
-   a smaller checkpoint interval increases the volume of output to the WAL log,
+   a smaller checkpoint interval increases the volume of output to the WAL,
    partially negating the goal of using a smaller interval,
    and in any case causing more disk I/O.
   </para>
@@ -613,10 +613,10 @@
   <para>
    The number of WAL segment files in <filename>pg_wal</filename> directory depends on
    <varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
-   the amount of WAL generated in previous checkpoint cycles. When old log
+   the amount of WAL generated in previous checkpoint cycles. When old WAL
    segment files are no longer needed, they are removed or recycled (that is,
    renamed to become future segments in the numbered sequence). If, due to a
-   short-term peak of log output rate, <varname>max_wal_size</varname> is
+   short-term peak of WAL output rate, <varname>max_wal_size</varname> is
    exceeded, the unneeded segment files will be removed until the system
    gets back under this limit. Below that limit, the system recycles enough
    WAL files to cover the estimated need until the next checkpoint, and
@@ -649,7 +649,7 @@
    which are similar to checkpoints in normal operation: the server forces
    all its state to disk, updates the <filename>pg_control</filename> file to
    indicate that the already-processed WAL data need not be scanned again,
-   and then recycles any old log segment files in the <filename>pg_wal</filename>
+   and then recycles any old WAL segment files in the <filename>pg_wal</filename>
    directory.
    Restartpoints can't be performed more frequently than checkpoints on the
    primary because restartpoints can only be performed at checkpoint records.
@@ -675,12 +675,12 @@
    insertion) at a time when an exclusive lock is held on affected
    data pages, so the operation needs to be as fast as possible.  What
    is worse, writing <acronym>WAL</acronym> buffers might also force the
-   creation of a new log segment, which takes even more
+   creation of a new WAL segment, which takes even more
    time. Normally, <acronym>WAL</acronym> buffers should be written
    and flushed by an <function>XLogFlush</function> request, which is
    made, for the most part, at transaction commit time to ensure that
    transaction records are flushed to permanent storage. On systems
-   with high log output, <function>XLogFlush</function> requests might
+   with high WAL output, <function>XLogFlush</function> requests might
    not occur often enough to prevent <function>XLogInsertRecord</function>
    from having to do writes.  On such systems
    one should increase the number of <acronym>WAL</acronym> buffers by
@@ -723,7 +723,7 @@
    <varname>commit_delay</varname>, so this value is recommended as the
    starting point to use when optimizing for a particular workload.  While
    tuning <varname>commit_delay</varname> is particularly useful when the
-   WAL log is stored on high-latency rotating disks, benefits can be
+   WAL is stored on high-latency rotating disks, benefits can be
    significant even on storage media with very fast sync times, such as
    solid-state drives or RAID arrays with a battery-backed write cache;
    but this should definitely be tested against a representative workload.
@@ -827,16 +827,16 @@
   <para>
    <acronym>WAL</acronym> is automatically enabled; no action is
    required from the administrator except ensuring that the
-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> files are met,
    and that any necessary tuning is done (see <xref
    linkend="wal-configuration"/>).
   </para>
 
   <para>
    <acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
-   logs as each new record is written. The insert position is described by
+   files as each new record is written. The insert position is described by
    a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
-   the logs, increasing monotonically with each new record.
+   the WAL, increasing monotonically with each new record.
    <acronym>LSN</acronym> values are returned as the datatype
    <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
    compared to calculate the volume of <acronym>WAL</acronym> data that
@@ -845,12 +845,12 @@
   </para>
 
   <para>
-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> files are stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
@@ -860,7 +860,7 @@
   </para>
 
   <para>
-   It is advantageous if the log is located on a different disk from the
+   It is advantageous if the WAL is located on a different disk from the
    main database files.  This can be achieved by moving the
    <filename>pg_wal</filename> directory to another location (while the server
    is shut down, of course) and creating a symbolic link from the
@@ -876,19 +876,19 @@
    on the disk.  A power failure in such a situation might lead to
    irrecoverable data corruption.  Administrators should try to ensure
    that disks holding <productname>PostgreSQL</productname>'s
-   <acronym>WAL</acronym> log files do not make such false reports.
+   <acronym>WAL</acronym> files do not make such false reports.
    (See <xref linkend="wal-reliability"/>.)
   </para>
 
   <para>
-   After a checkpoint has been made and the log flushed, the
+   After a checkpoint has been made and the WAL flushed, the
    checkpoint's position is saved in the file
    <filename>pg_control</filename>. Therefore, at the start of recovery,
    the server first reads <filename>pg_control</filename> and
    then the checkpoint record; then it performs the REDO operation by
-   scanning forward from the log location indicated in the checkpoint
+   scanning forward from the WAL location indicated in the checkpoint
    record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL record on the first page modification after a checkpoint (assuming
    <xref linkend="guc-full-page-writes"/> is not disabled), all pages
    changed since the checkpoint will be restored to a consistent
    state.
@@ -896,7 +896,7 @@
 
   <para>
    To deal with the case where <filename>pg_control</filename> is
-   corrupt, we should support the possibility of scanning existing log
+   corrupt, we should support the possibility of scanning existing WAL
    segments in reverse order &mdash; newest to oldest &mdash; in order to find the
    latest checkpoint.  This has not been implemented yet.
    <filename>pg_control</filename> is small enough (less than one disk page)
-- 
2.25.1

#10Nathan Bossart
nathandbossart@gmail.com
In reply to: Bharath Rupireddy (#9)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

Overall, these patches look reasonable.

On Mon, Jul 18, 2022 at 04:24:12PM +0530, Bharath Rupireddy wrote:

record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL record on the first page modification after a checkpoint (assuming
<xref linkend="guc-full-page-writes"/> is not disabled), all pages
changed since the checkpoint will be restored to a consistent
state.

nitpick: I would remove the word "record" in this change.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#11Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Nathan Bossart (#10)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Tue, Jul 19, 2022 at 12:00 AM Nathan Bossart
<nathandbossart@gmail.com> wrote:

Overall, these patches look reasonable.

On Mon, Jul 18, 2022 at 04:24:12PM +0530, Bharath Rupireddy wrote:

record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL record on the first page modification after a checkpoint (assuming
<xref linkend="guc-full-page-writes"/> is not disabled), all pages
changed since the checkpoint will be restored to a consistent
state.

nitpick: I would remove the word "record" in this change.

Done. PSA v5 patch set.
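
For anyone skimming the thread, the user-visible effect of 0001 is only the
wording of the emitted messages. Below is a throwaway sketch that renders one
of the changed messages before and after, using plain printf and made-up
values; the real code of course goes through ereport()/errmsg() and
translation, so this is illustration only, not part of the patches:

#include <stdio.h>

int
main(void)
{
	/* made-up values, for illustration only */
	unsigned int magic = 0xD10A;
	const char *fname = "000000010000000000000002";
	unsigned int offset = 24576;

	/* old wording: "log" is easy to misread as the server log */
	printf("invalid magic number %04X in log segment %s, offset %u\n",
		   magic, fname, offset);

	/* new wording from the 0001 patch */
	printf("invalid magic number %04X in WAL segment %s, offset %u\n",
		   magic, fname, offset);

	return 0;
}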

Regards,
Bharath Rupireddy.

Attachments:

v5-0001-Use-WAL-segment-instead-of-log-segment.patchapplication/octet-stream; name=v5-0001-Use-WAL-segment-instead-of-log-segment.patchDownload
From 68d39c9b087e25fc39312d72cca0ed31c1d9fad2 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Mon, 18 Jul 2022 10:27:20 +0000
Subject: [PATCH v5] Use "WAL segment" instead of "log segment"

We are using "log segment" in various user-facing messages, the
term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages.

Author: Bharath Rupireddy
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f3dc4b7797..58f1a32b00 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1209,7 +1209,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -1223,7 +1223,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1264,7 +1264,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1283,7 +1283,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -1308,7 +1308,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 5d6f1b5e46..306a9f40e9 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3018,7 +3018,7 @@ ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3223,13 +3223,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 0cda22597f..9e3a000768 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -1049,14 +1049,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 3d37c1fe62..3767466ef3 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index d4772a2965..7adf79eeed 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -788,7 +788,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 07de918358..678e8ebf6b 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -350,7 +350,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 6528113628..4eebeadc8c 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -667,7 +667,7 @@ usage(void)
 	printf(_("  -F, --fork=FORK        only show records that modify blocks in fork FORK;\n"
 			 "                         valid names are main, fsm, vm, init\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
@@ -675,7 +675,7 @@ usage(void)
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -R, --relation=T/D/R   only show records that modify blocks in relation T/D/R\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -w, --fullpage         only show records with a full page write\n"));
-- 
2.25.1

v5-0002-Replace-log-record-with-WAL-record-in-docs.patchapplication/octet-stream; name=v5-0002-Replace-log-record-with-WAL-record-in-docs.patchDownload
From 8657cf222102cd6f15ee773c6301ed41be2831eb Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Tue, 19 Jul 2022 09:12:12 +0000
Subject: [PATCH v5] Replace log record with WAL record in docs

Authors: Kyotaro Horiguchi, Bharath Rupireddy
---
 doc/src/sgml/backup.sgml         | 14 ++++----
 doc/src/sgml/ref/pg_waldump.sgml | 10 +++---
 doc/src/sgml/wal.sgml            | 60 ++++++++++++++++----------------
 3 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 73a774d3d7..cc5ae59ac2 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1095,7 +1095,7 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
      require that you have enough free space on your system to hold two
      copies of your existing database. If you do not have enough space,
      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.
     </para>
    </listitem>
@@ -1173,8 +1173,8 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
     which tells <productname>PostgreSQL</productname> how to retrieve archived
     WAL file segments.  Like the <varname>archive_command</varname>, this is
     a shell command string.  It can contain <literal>%f</literal>, which is
-    replaced by the name of the desired log file, and <literal>%p</literal>,
-    which is replaced by the path name to copy the log file to.
+    replaced by the name of the desired WAL file, and <literal>%p</literal>,
+    which is replaced by the path name to copy the WAL file to.
     (The path name is relative to the current working directory,
     i.e., the cluster's data directory.)
     Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
@@ -1462,9 +1462,9 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
      <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
      commands are WAL-logged with the literal absolute path, and will
      therefore be replayed as tablespace creations with the same
-     absolute path.  This might be undesirable if the log is being
+     absolute path.  This might be undesirable if the WAL is being
      replayed on a different machine.  It can be dangerous even if the
-     log is being replayed on the same machine, but into a new data
+     WAL is being replayed on the same machine, but into a new data
      directory: the replay will still overwrite the contents of the
      original tablespace.  To avoid potential gotchas of this sort,
      the best practice is to take a new base backup after creating or
@@ -1481,11 +1481,11 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
     we might need to fix partially-written disk pages.  Depending on
     your system hardware and software, the risk of partial writes might
     be small enough to ignore, in which case you can significantly
-    reduce the total volume of archived logs by turning off page
+    reduce the total volume of archived WAL files by turning off page
     snapshots using the <xref linkend="guc-full-page-writes"/>
     parameter.  (Read the notes and warnings in <xref linkend="wal"/>
     before you do so.)  Turning off page snapshots does not prevent
-    use of the logs for PITR operations.  An area for future
+    use of the WAL for PITR operations.  An area for future
     development is to compress archived WAL data by removing
     unnecessary page copies even when <varname>full_page_writes</varname> is
     on.  In the meantime, administrators might wish to reduce the number
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 57746d9421..2e2166bb6f 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -53,7 +53,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">startseg</replaceable></term>
       <listitem>
        <para>
-        Start reading at the specified log segment file.  This implicitly determines
+        Start reading at the specified WAL segment file.  This implicitly determines
         the path in which files will be searched for, and the timeline to use.
        </para>
       </listitem>
@@ -63,7 +63,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">endseg</replaceable></term>
       <listitem>
        <para>
-        Stop after reading the specified log segment file.
+        Stop after reading the specified WAL segment file.
        </para>
       </listitem>
      </varlistentry>
@@ -141,7 +141,7 @@ PostgreSQL documentation
       <term><option>--path=<replaceable>path</replaceable></option></term>
       <listitem>
        <para>
-        Specifies a directory to search for log segment files or a
+        Specifies a directory to search for WAL segment files or a
         directory with a <literal>pg_wal</literal> subdirectory that
         contains such files.  The default is to search in the current
         directory, the <literal>pg_wal</literal> subdirectory of the
@@ -203,7 +203,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -213,7 +213,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 4b6ef283c1..315ce5f8bd 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -296,12 +296,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -322,15 +322,15 @@
 
    <para>
     Using <acronym>WAL</acronym> results in a
-    significantly reduced number of disk writes, because only the log
+    significantly reduced number of disk writes, because only the WAL
     file needs to be flushed to disk to guarantee that a transaction is
     committed, rather than every data file changed by the transaction.
-    The log file is written sequentially,
-    and so the cost of syncing the log is much less than the cost of
+    The WAL file is written sequentially,
+    and so the cost of syncing the WAL is much less than the cost of
     flushing the data pages.  This is especially true for servers
     handling many small transactions touching different parts of the data
     store.  Furthermore, when the server is processing many small concurrent
-    transactions, one <function>fsync</function> of the log file may
+    transactions, one <function>fsync</function> of the WAL file may
     suffice to commit many transactions.
    </para>
 
@@ -340,10 +340,10 @@
     linkend="continuous-archiving"/>.  By archiving the WAL data we can support
     reverting to any time instant covered by the available WAL data:
     we simply install a prior physical backup of the database, and
-    replay the WAL log just as far as the desired time.  What's more,
+    replay the WAL just as far as the desired time.  What's more,
     the physical backup doesn't have to be an instantaneous snapshot
     of the database state &mdash; if it is made over some period of time,
-    then replaying the WAL log for that period will fix any internal
+    then replaying the WAL for that period will fix any internal
     inconsistencies.
    </para>
   </sect1>
@@ -496,15 +496,15 @@
    that the heap and index data files have been updated with all
    information written before that checkpoint.  At checkpoint time, all
    dirty data pages are flushed to disk and a special checkpoint record is
-   written to the log file.  (The change records were previously flushed
+   written to the WAL file.  (The change records were previously flushed
    to the <acronym>WAL</acronym> files.)
    In the event of a crash, the crash recovery procedure looks at the latest
-   checkpoint record to determine the point in the log (known as the redo
+   checkpoint record to determine the point in the WAL (known as the redo
    record) from which it should start the REDO operation.  Any changes made to
    data files before that point are guaranteed to be already on disk.
-   Hence, after a checkpoint, log segments preceding the one containing
+   Hence, after a checkpoint, WAL segments preceding the one containing
    the redo record are no longer needed and can be recycled or removed. (When
-   <acronym>WAL</acronym> archiving is being done, the log segments must be
+   <acronym>WAL</acronym> archiving is being done, the WAL segments must be
    archived before being recycled or removed.)
   </para>
 
@@ -543,7 +543,7 @@
    another factor to consider. To ensure data page consistency,
    the first modification of a data page after each checkpoint results in
    logging the entire page content. In that case,
-   a smaller checkpoint interval increases the volume of output to the WAL log,
+   a smaller checkpoint interval increases the volume of output to the WAL,
    partially negating the goal of using a smaller interval,
    and in any case causing more disk I/O.
   </para>
@@ -613,10 +613,10 @@
   <para>
    The number of WAL segment files in <filename>pg_wal</filename> directory depends on
    <varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
-   the amount of WAL generated in previous checkpoint cycles. When old log
+   the amount of WAL generated in previous checkpoint cycles. When old WAL
    segment files are no longer needed, they are removed or recycled (that is,
    renamed to become future segments in the numbered sequence). If, due to a
-   short-term peak of log output rate, <varname>max_wal_size</varname> is
+   short-term peak of WAL output rate, <varname>max_wal_size</varname> is
    exceeded, the unneeded segment files will be removed until the system
    gets back under this limit. Below that limit, the system recycles enough
    WAL files to cover the estimated need until the next checkpoint, and
@@ -649,7 +649,7 @@
    which are similar to checkpoints in normal operation: the server forces
    all its state to disk, updates the <filename>pg_control</filename> file to
    indicate that the already-processed WAL data need not be scanned again,
-   and then recycles any old log segment files in the <filename>pg_wal</filename>
+   and then recycles any old WAL segment files in the <filename>pg_wal</filename>
    directory.
    Restartpoints can't be performed more frequently than checkpoints on the
    primary because restartpoints can only be performed at checkpoint records.
@@ -675,12 +675,12 @@
    insertion) at a time when an exclusive lock is held on affected
    data pages, so the operation needs to be as fast as possible.  What
    is worse, writing <acronym>WAL</acronym> buffers might also force the
-   creation of a new log segment, which takes even more
+   creation of a new WAL segment, which takes even more
    time. Normally, <acronym>WAL</acronym> buffers should be written
    and flushed by an <function>XLogFlush</function> request, which is
    made, for the most part, at transaction commit time to ensure that
    transaction records are flushed to permanent storage. On systems
-   with high log output, <function>XLogFlush</function> requests might
+   with high WAL output, <function>XLogFlush</function> requests might
    not occur often enough to prevent <function>XLogInsertRecord</function>
    from having to do writes.  On such systems
    one should increase the number of <acronym>WAL</acronym> buffers by
@@ -723,7 +723,7 @@
    <varname>commit_delay</varname>, so this value is recommended as the
    starting point to use when optimizing for a particular workload.  While
    tuning <varname>commit_delay</varname> is particularly useful when the
-   WAL log is stored on high-latency rotating disks, benefits can be
+   WAL is stored on high-latency rotating disks, benefits can be
    significant even on storage media with very fast sync times, such as
    solid-state drives or RAID arrays with a battery-backed write cache;
    but this should definitely be tested against a representative workload.
@@ -827,16 +827,16 @@
   <para>
    <acronym>WAL</acronym> is automatically enabled; no action is
    required from the administrator except ensuring that the
-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> files are met,
    and that any necessary tuning is done (see <xref
    linkend="wal-configuration"/>).
   </para>
 
   <para>
    <acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
-   logs as each new record is written. The insert position is described by
+   files as each new record is written. The insert position is described by
    a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
-   the logs, increasing monotonically with each new record.
+   the WAL, increasing monotonically with each new record.
    <acronym>LSN</acronym> values are returned as the datatype
    <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
    compared to calculate the volume of <acronym>WAL</acronym> data that
@@ -845,12 +845,12 @@
   </para>
 
   <para>
-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> files are stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
@@ -860,7 +860,7 @@
   </para>
 
   <para>
-   It is advantageous if the log is located on a different disk from the
+   It is advantageous if the WAL is located on a different disk from the
    main database files.  This can be achieved by moving the
    <filename>pg_wal</filename> directory to another location (while the server
    is shut down, of course) and creating a symbolic link from the
@@ -876,19 +876,19 @@
    on the disk.  A power failure in such a situation might lead to
    irrecoverable data corruption.  Administrators should try to ensure
    that disks holding <productname>PostgreSQL</productname>'s
-   <acronym>WAL</acronym> log files do not make such false reports.
+   <acronym>WAL</acronym> files do not make such false reports.
    (See <xref linkend="wal-reliability"/>.)
   </para>
 
   <para>
-   After a checkpoint has been made and the log flushed, the
+   After a checkpoint has been made and the WAL flushed, the
    checkpoint's position is saved in the file
    <filename>pg_control</filename>. Therefore, at the start of recovery,
    the server first reads <filename>pg_control</filename> and
    then the checkpoint record; then it performs the REDO operation by
-   scanning forward from the log location indicated in the checkpoint
+   scanning forward from the WAL location indicated in the checkpoint
    record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL on the first page modification after a checkpoint (assuming
    <xref linkend="guc-full-page-writes"/> is not disabled), all pages
    changed since the checkpoint will be restored to a consistent
    state.
@@ -896,7 +896,7 @@
 
   <para>
    To deal with the case where <filename>pg_control</filename> is
-   corrupt, we should support the possibility of scanning existing log
+   corrupt, we should support the possibility of scanning existing WAL
    segments in reverse order &mdash; newest to oldest &mdash; in order to find the
    latest checkpoint.  This has not been implemented yet.
    <filename>pg_control</filename> is small enough (less than one disk page)
-- 
2.25.1

#12 Nathan Bossart
nathandbossart@gmail.com
In reply to: Bharath Rupireddy (#11)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Tue, Jul 19, 2022 at 02:43:59PM +0530, Bharath Rupireddy wrote:

Done. PSA v5 patch set.

LGTM. I've marked this as ready-for-committer.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#13 Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Nathan Bossart (#12)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

At Tue, 19 Jul 2022 09:58:28 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in

On Tue, Jul 19, 2022 at 02:43:59PM +0530, Bharath Rupireddy wrote:

Done. PSA v5 patch set.

LGTM. I've marked this as ready-for-committer.

I find the following sentence in config.sgml: "Specifies the minimum
size of past log file segments kept in the pg_wal directory".

postgresql.conf.sample contains "logfile segment" in a few lines.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#14 Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Kyotaro Horiguchi (#13)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Wed, Jul 20, 2022 at 6:55 AM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:

At Tue, 19 Jul 2022 09:58:28 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in

On Tue, Jul 19, 2022 at 02:43:59PM +0530, Bharath Rupireddy wrote:

Done. PSA v5 patch set.

LGTM. I've marked this as ready-for-committer.

I find the following sentence in config.sgml: "Specifies the minimum
size of past log file segments kept in the pg_wal directory".

postgresql.conf.sample contains "logfile segment" in a few lines.

Done. PSA v6 patch set.

Regards,
Bharath Rupireddy.

Attachments:

v6-0001-Use-WAL-segment-instead-of-log-segment.patch (application/octet-stream)
From 6094c80f28c64904ede361672521be6b86993adb Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 20 Jul 2022 04:25:10 +0000
Subject: [PATCH v6] Use "WAL segment" instead of "log segment"

We are using "log segment" in various user-facing messages, the
term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages.

Author: Bharath Rupireddy
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f3dc4b7797..58f1a32b00 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1209,7 +1209,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -1223,7 +1223,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1264,7 +1264,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1283,7 +1283,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -1308,7 +1308,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 5d6f1b5e46..306a9f40e9 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3018,7 +3018,7 @@ ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3223,13 +3223,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 0cda22597f..9e3a000768 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -1049,14 +1049,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 3d37c1fe62..3767466ef3 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index d4772a2965..7adf79eeed 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -788,7 +788,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 07de918358..678e8ebf6b 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -350,7 +350,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 6528113628..4eebeadc8c 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -667,7 +667,7 @@ usage(void)
 	printf(_("  -F, --fork=FORK        only show records that modify blocks in fork FORK;\n"
 			 "                         valid names are main, fsm, vm, init\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
@@ -675,7 +675,7 @@ usage(void)
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -R, --relation=T/D/R   only show records that modify blocks in relation T/D/R\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -w, --fullpage         only show records with a full page write\n"));
-- 
2.25.1

v6-0002-Consistently-use-WAL-file-s-in-docs.patch (application/octet-stream)
From 2a89079004c0f85040bd3d504874bff1e9e34f39 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 20 Jul 2022 04:29:46 +0000
Subject: [PATCH v6] Consistently use "WAL file(s)" in docs

Authors: Kyotaro Horiguchi, Bharath Rupireddy
---
 doc/src/sgml/backup.sgml         | 14 ++++----
 doc/src/sgml/config.sgml         |  4 +--
 doc/src/sgml/ref/pg_waldump.sgml | 10 +++---
 doc/src/sgml/wal.sgml            | 60 ++++++++++++++++----------------
 4 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 73a774d3d7..cc5ae59ac2 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1095,7 +1095,7 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
      require that you have enough free space on your system to hold two
      copies of your existing database. If you do not have enough space,
      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.
     </para>
    </listitem>
@@ -1173,8 +1173,8 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
     which tells <productname>PostgreSQL</productname> how to retrieve archived
     WAL file segments.  Like the <varname>archive_command</varname>, this is
     a shell command string.  It can contain <literal>%f</literal>, which is
-    replaced by the name of the desired log file, and <literal>%p</literal>,
-    which is replaced by the path name to copy the log file to.
+    replaced by the name of the desired WAL file, and <literal>%p</literal>,
+    which is replaced by the path name to copy the WAL file to.
     (The path name is relative to the current working directory,
     i.e., the cluster's data directory.)
     Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
@@ -1462,9 +1462,9 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
      <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
      commands are WAL-logged with the literal absolute path, and will
      therefore be replayed as tablespace creations with the same
-     absolute path.  This might be undesirable if the log is being
+     absolute path.  This might be undesirable if the WAL is being
      replayed on a different machine.  It can be dangerous even if the
-     log is being replayed on the same machine, but into a new data
+     WAL is being replayed on the same machine, but into a new data
      directory: the replay will still overwrite the contents of the
      original tablespace.  To avoid potential gotchas of this sort,
      the best practice is to take a new base backup after creating or
@@ -1481,11 +1481,11 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
     we might need to fix partially-written disk pages.  Depending on
     your system hardware and software, the risk of partial writes might
     be small enough to ignore, in which case you can significantly
-    reduce the total volume of archived logs by turning off page
+    reduce the total volume of archived WAL files by turning off page
     snapshots using the <xref linkend="guc-full-page-writes"/>
     parameter.  (Read the notes and warnings in <xref linkend="wal"/>
     before you do so.)  Turning off page snapshots does not prevent
-    use of the logs for PITR operations.  An area for future
+    use of the WAL for PITR operations.  An area for future
     development is to compress archived WAL data by removing
     unnecessary page copies even when <varname>full_page_writes</varname> is
     on.  In the meantime, administrators might wish to reduce the number
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 37fd80388c..8275e557ef 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4228,7 +4228,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
        </term>
        <listitem>
        <para>
-        Specifies the minimum size of past log file segments kept in the
+        Specifies the minimum size of past WAL files kept in the
         <filename>pg_wal</filename>
         directory, in case a standby server needs to fetch them for streaming
         replication. If a standby
@@ -4821,7 +4821,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         needs to control the amount of time to wait for new WAL data to be
         available. For example, in archive recovery, it is possible to
         make the recovery more responsive in the detection of a new WAL
-        log file by reducing the value of this parameter. On a system with
+        file by reducing the value of this parameter. On a system with
         low WAL activity, increasing it reduces the amount of requests necessary
         to access WAL archives, something useful for example in cloud
         environments where the number of times an infrastructure is accessed
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 57746d9421..2e2166bb6f 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -53,7 +53,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">startseg</replaceable></term>
       <listitem>
        <para>
-        Start reading at the specified log segment file.  This implicitly determines
+        Start reading at the specified WAL segment file.  This implicitly determines
         the path in which files will be searched for, and the timeline to use.
        </para>
       </listitem>
@@ -63,7 +63,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">endseg</replaceable></term>
       <listitem>
        <para>
-        Stop after reading the specified log segment file.
+        Stop after reading the specified WAL segment file.
        </para>
       </listitem>
      </varlistentry>
@@ -141,7 +141,7 @@ PostgreSQL documentation
       <term><option>--path=<replaceable>path</replaceable></option></term>
       <listitem>
        <para>
-        Specifies a directory to search for log segment files or a
+        Specifies a directory to search for WAL segment files or a
         directory with a <literal>pg_wal</literal> subdirectory that
         contains such files.  The default is to search in the current
         directory, the <literal>pg_wal</literal> subdirectory of the
@@ -203,7 +203,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -213,7 +213,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 01f7379ebb..30842c0396 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -297,12 +297,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -323,15 +323,15 @@
 
    <para>
     Using <acronym>WAL</acronym> results in a
-    significantly reduced number of disk writes, because only the log
+    significantly reduced number of disk writes, because only the WAL
     file needs to be flushed to disk to guarantee that a transaction is
     committed, rather than every data file changed by the transaction.
-    The log file is written sequentially,
-    and so the cost of syncing the log is much less than the cost of
+    The WAL file is written sequentially,
+    and so the cost of syncing the WAL is much less than the cost of
     flushing the data pages.  This is especially true for servers
     handling many small transactions touching different parts of the data
     store.  Furthermore, when the server is processing many small concurrent
-    transactions, one <function>fsync</function> of the log file may
+    transactions, one <function>fsync</function> of the WAL file may
     suffice to commit many transactions.
    </para>
 
@@ -341,10 +341,10 @@
     linkend="continuous-archiving"/>.  By archiving the WAL data we can support
     reverting to any time instant covered by the available WAL data:
     we simply install a prior physical backup of the database, and
-    replay the WAL log just as far as the desired time.  What's more,
+    replay the WAL just as far as the desired time.  What's more,
     the physical backup doesn't have to be an instantaneous snapshot
     of the database state &mdash; if it is made over some period of time,
-    then replaying the WAL log for that period will fix any internal
+    then replaying the WAL for that period will fix any internal
     inconsistencies.
    </para>
   </sect1>
@@ -497,15 +497,15 @@
    that the heap and index data files have been updated with all
    information written before that checkpoint.  At checkpoint time, all
    dirty data pages are flushed to disk and a special checkpoint record is
-   written to the log file.  (The change records were previously flushed
+   written to the WAL file.  (The change records were previously flushed
    to the <acronym>WAL</acronym> files.)
    In the event of a crash, the crash recovery procedure looks at the latest
-   checkpoint record to determine the point in the log (known as the redo
+   checkpoint record to determine the point in the WAL (known as the redo
    record) from which it should start the REDO operation.  Any changes made to
    data files before that point are guaranteed to be already on disk.
-   Hence, after a checkpoint, log segments preceding the one containing
+   Hence, after a checkpoint, WAL segments preceding the one containing
    the redo record are no longer needed and can be recycled or removed. (When
-   <acronym>WAL</acronym> archiving is being done, the log segments must be
+   <acronym>WAL</acronym> archiving is being done, the WAL segments must be
    archived before being recycled or removed.)
   </para>
 
@@ -544,7 +544,7 @@
    another factor to consider. To ensure data page consistency,
    the first modification of a data page after each checkpoint results in
    logging the entire page content. In that case,
-   a smaller checkpoint interval increases the volume of output to the WAL log,
+   a smaller checkpoint interval increases the volume of output to the WAL,
    partially negating the goal of using a smaller interval,
    and in any case causing more disk I/O.
   </para>
@@ -614,10 +614,10 @@
   <para>
    The number of WAL segment files in <filename>pg_wal</filename> directory depends on
    <varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
-   the amount of WAL generated in previous checkpoint cycles. When old log
+   the amount of WAL generated in previous checkpoint cycles. When old WAL
    segment files are no longer needed, they are removed or recycled (that is,
    renamed to become future segments in the numbered sequence). If, due to a
-   short-term peak of log output rate, <varname>max_wal_size</varname> is
+   short-term peak of WAL output rate, <varname>max_wal_size</varname> is
    exceeded, the unneeded segment files will be removed until the system
    gets back under this limit. Below that limit, the system recycles enough
    WAL files to cover the estimated need until the next checkpoint, and
@@ -650,7 +650,7 @@
    which are similar to checkpoints in normal operation: the server forces
    all its state to disk, updates the <filename>pg_control</filename> file to
    indicate that the already-processed WAL data need not be scanned again,
-   and then recycles any old log segment files in the <filename>pg_wal</filename>
+   and then recycles any old WAL segment files in the <filename>pg_wal</filename>
    directory.
    Restartpoints can't be performed more frequently than checkpoints on the
    primary because restartpoints can only be performed at checkpoint records.
@@ -676,12 +676,12 @@
    insertion) at a time when an exclusive lock is held on affected
    data pages, so the operation needs to be as fast as possible.  What
    is worse, writing <acronym>WAL</acronym> buffers might also force the
-   creation of a new log segment, which takes even more
+   creation of a new WAL segment, which takes even more
    time. Normally, <acronym>WAL</acronym> buffers should be written
    and flushed by an <function>XLogFlush</function> request, which is
    made, for the most part, at transaction commit time to ensure that
    transaction records are flushed to permanent storage. On systems
-   with high log output, <function>XLogFlush</function> requests might
+   with high WAL output, <function>XLogFlush</function> requests might
    not occur often enough to prevent <function>XLogInsertRecord</function>
    from having to do writes.  On such systems
    one should increase the number of <acronym>WAL</acronym> buffers by
@@ -724,7 +724,7 @@
    <varname>commit_delay</varname>, so this value is recommended as the
    starting point to use when optimizing for a particular workload.  While
    tuning <varname>commit_delay</varname> is particularly useful when the
-   WAL log is stored on high-latency rotating disks, benefits can be
+   WAL is stored on high-latency rotating disks, benefits can be
    significant even on storage media with very fast sync times, such as
    solid-state drives or RAID arrays with a battery-backed write cache;
    but this should definitely be tested against a representative workload.
@@ -828,16 +828,16 @@
   <para>
    <acronym>WAL</acronym> is automatically enabled; no action is
    required from the administrator except ensuring that the
-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> files are met,
    and that any necessary tuning is done (see <xref
    linkend="wal-configuration"/>).
   </para>
 
   <para>
    <acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
-   logs as each new record is written. The insert position is described by
+   files as each new record is written. The insert position is described by
    a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
-   the logs, increasing monotonically with each new record.
+   the WAL, increasing monotonically with each new record.
    <acronym>LSN</acronym> values are returned as the datatype
    <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
    compared to calculate the volume of <acronym>WAL</acronym> data that
@@ -846,12 +846,12 @@
   </para>
 
   <para>
-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> files are stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
@@ -861,7 +861,7 @@
   </para>
 
   <para>
-   It is advantageous if the log is located on a different disk from the
+   It is advantageous if the WAL is located on a different disk from the
    main database files.  This can be achieved by moving the
    <filename>pg_wal</filename> directory to another location (while the server
    is shut down, of course) and creating a symbolic link from the
@@ -877,19 +877,19 @@
    on the disk.  A power failure in such a situation might lead to
    irrecoverable data corruption.  Administrators should try to ensure
    that disks holding <productname>PostgreSQL</productname>'s
-   <acronym>WAL</acronym> log files do not make such false reports.
+   <acronym>WAL</acronym> files do not make such false reports.
    (See <xref linkend="wal-reliability"/>.)
   </para>
 
   <para>
-   After a checkpoint has been made and the log flushed, the
+   After a checkpoint has been made and the WAL flushed, the
    checkpoint's position is saved in the file
    <filename>pg_control</filename>. Therefore, at the start of recovery,
    the server first reads <filename>pg_control</filename> and
    then the checkpoint record; then it performs the REDO operation by
-   scanning forward from the log location indicated in the checkpoint
+   scanning forward from the WAL location indicated in the checkpoint
    record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL on the first page modification after a checkpoint (assuming
    <xref linkend="guc-full-page-writes"/> is not disabled), all pages
    changed since the checkpoint will be restored to a consistent
    state.
@@ -897,7 +897,7 @@
 
   <para>
    To deal with the case where <filename>pg_control</filename> is
-   corrupt, we should support the possibility of scanning existing log
+   corrupt, we should support the possibility of scanning existing WAL
    segments in reverse order &mdash; newest to oldest &mdash; in order to find the
    latest checkpoint.  This has not been implemented yet.
    <filename>pg_control</filename> is small enough (less than one disk page)
-- 
2.25.1

#15 Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Bharath Rupireddy (#14)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

At Wed, 20 Jul 2022 10:02:22 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in

Done. PSA v6 patch set.

Thanks!

-        Specifies the minimum size of past log file segments kept in the
+        Specifies the minimum size of past WAL files kept in the
-        log file by reducing the value of this parameter. On a system with
+        file by reducing the value of this parameter. On a system with

Looks fine. And postgresql.conf.sample has the following lines:

#archive_library = '' # library to use to archive a logfile segment

#archive_command = '' # command to use to archive a logfile segment

#archive_timeout = 0 # force a logfile segment switch after this

#restore_command = '' # command to use to restore an archived logfile segment

Don't they need the same fix?

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#16 Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Kyotaro Horiguchi (#15)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Wed, Jul 20, 2022 at 12:55 PM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:

At Wed, 20 Jul 2022 10:02:22 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in

Done. PSA v6 patch set.

Thanks!

-        Specifies the minimum size of past log file segments kept in the
+        Specifies the minimum size of past WAL files kept in the
-        log file by reducing the value of this parameter. On a system with
+        file by reducing the value of this parameter. On a system with

Looks fine. And postgresql.conf.sample has the following lines:

#archive_library = '' # library to use to archive a logfile segment

#archive_command = '' # command to use to archive a logfile segment

#archive_timeout = 0 # force a logfile segment switch after this

#restore_command = '' # command to use to restore an archived logfile segment

Don't they need the same fix?

Indeed, thanks. They are now in sync with their peers in
postgresql.conf.sample as well as with the descriptions in guc.c.
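
For reference, after this change those sample lines should read roughly
as follows, assuming the same "WAL file" wording used in the guc.c
descriptions (the exact text is in the attached v7-0002 patch):

#archive_library = ''		# library to use to archive a WAL file
#archive_command = ''		# command to use to archive a WAL file
#archive_timeout = 0		# force a WAL file switch after this
#restore_command = ''		# command to use to restore an archived WAL file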

PSA v7 patch set.

Regards,
Bharath Rupireddy.

Attachments:

v7-0001-Use-WAL-segment-instead-of-log-segment.patch (application/octet-stream)
From 6094c80f28c64904ede361672521be6b86993adb Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 20 Jul 2022 04:25:10 +0000
Subject: [PATCH v7] Use "WAL segment" instead of "log segment"

We are using "log segment" in various user-facing messages, the
term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages.

Author: Bharath Rupireddy
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f3dc4b7797..58f1a32b00 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1209,7 +1209,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -1223,7 +1223,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1264,7 +1264,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1283,7 +1283,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -1308,7 +1308,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 5d6f1b5e46..306a9f40e9 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3018,7 +3018,7 @@ ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3223,13 +3223,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 0cda22597f..9e3a000768 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -1049,14 +1049,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 3d37c1fe62..3767466ef3 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index d4772a2965..7adf79eeed 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -788,7 +788,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 07de918358..678e8ebf6b 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -350,7 +350,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 6528113628..4eebeadc8c 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -667,7 +667,7 @@ usage(void)
 	printf(_("  -F, --fork=FORK        only show records that modify blocks in fork FORK;\n"
 			 "                         valid names are main, fsm, vm, init\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
@@ -675,7 +675,7 @@ usage(void)
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -R, --relation=T/D/R   only show records that modify blocks in relation T/D/R\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -w, --fullpage         only show records with a full page write\n"));
-- 
2.25.1

v7-0002-Consistently-use-WAL-file-s-in-docs.patch (application/octet-stream)
From 0e662610e1335a16529d9da6bb45cd9d410faa45 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Wed, 20 Jul 2022 11:53:03 +0000
Subject: [PATCH v7] Consistently use "WAL file(s)" in docs

Authors: Kyotaro Horiguchi, Bharath Rupireddy
---
 doc/src/sgml/backup.sgml                      | 14 ++---
 doc/src/sgml/config.sgml                      |  4 +-
 doc/src/sgml/ref/pg_waldump.sgml              | 10 ++--
 doc/src/sgml/wal.sgml                         | 60 +++++++++----------
 src/backend/utils/misc/postgresql.conf.sample |  8 +--
 5 files changed, 48 insertions(+), 48 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 73a774d3d7..cc5ae59ac2 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1095,7 +1095,7 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
      require that you have enough free space on your system to hold two
      copies of your existing database. If you do not have enough space,
      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.
     </para>
    </listitem>
@@ -1173,8 +1173,8 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
     which tells <productname>PostgreSQL</productname> how to retrieve archived
     WAL file segments.  Like the <varname>archive_command</varname>, this is
     a shell command string.  It can contain <literal>%f</literal>, which is
-    replaced by the name of the desired log file, and <literal>%p</literal>,
-    which is replaced by the path name to copy the log file to.
+    replaced by the name of the desired WAL file, and <literal>%p</literal>,
+    which is replaced by the path name to copy the WAL file to.
     (The path name is relative to the current working directory,
     i.e., the cluster's data directory.)
     Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
@@ -1462,9 +1462,9 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
      <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
      commands are WAL-logged with the literal absolute path, and will
      therefore be replayed as tablespace creations with the same
-     absolute path.  This might be undesirable if the log is being
+     absolute path.  This might be undesirable if the WAL is being
      replayed on a different machine.  It can be dangerous even if the
-     log is being replayed on the same machine, but into a new data
+     WAL is being replayed on the same machine, but into a new data
      directory: the replay will still overwrite the contents of the
      original tablespace.  To avoid potential gotchas of this sort,
      the best practice is to take a new base backup after creating or
@@ -1481,11 +1481,11 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
     we might need to fix partially-written disk pages.  Depending on
     your system hardware and software, the risk of partial writes might
     be small enough to ignore, in which case you can significantly
-    reduce the total volume of archived logs by turning off page
+    reduce the total volume of archived WAL files by turning off page
     snapshots using the <xref linkend="guc-full-page-writes"/>
     parameter.  (Read the notes and warnings in <xref linkend="wal"/>
     before you do so.)  Turning off page snapshots does not prevent
-    use of the logs for PITR operations.  An area for future
+    use of the WAL for PITR operations.  An area for future
     development is to compress archived WAL data by removing
     unnecessary page copies even when <varname>full_page_writes</varname> is
     on.  In the meantime, administrators might wish to reduce the number
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 37fd80388c..8275e557ef 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4228,7 +4228,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
        </term>
        <listitem>
        <para>
-        Specifies the minimum size of past log file segments kept in the
+        Specifies the minimum size of past WAL files kept in the
         <filename>pg_wal</filename>
         directory, in case a standby server needs to fetch them for streaming
         replication. If a standby
@@ -4821,7 +4821,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         needs to control the amount of time to wait for new WAL data to be
         available. For example, in archive recovery, it is possible to
         make the recovery more responsive in the detection of a new WAL
-        log file by reducing the value of this parameter. On a system with
+        file by reducing the value of this parameter. On a system with
         low WAL activity, increasing it reduces the amount of requests necessary
         to access WAL archives, something useful for example in cloud
         environments where the number of times an infrastructure is accessed
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 57746d9421..2e2166bb6f 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -53,7 +53,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">startseg</replaceable></term>
       <listitem>
        <para>
-        Start reading at the specified log segment file.  This implicitly determines
+        Start reading at the specified WAL segment file.  This implicitly determines
         the path in which files will be searched for, and the timeline to use.
        </para>
       </listitem>
@@ -63,7 +63,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">endseg</replaceable></term>
       <listitem>
        <para>
-        Stop after reading the specified log segment file.
+        Stop after reading the specified WAL segment file.
        </para>
       </listitem>
      </varlistentry>
@@ -141,7 +141,7 @@ PostgreSQL documentation
       <term><option>--path=<replaceable>path</replaceable></option></term>
       <listitem>
        <para>
-        Specifies a directory to search for log segment files or a
+        Specifies a directory to search for WAL segment files or a
         directory with a <literal>pg_wal</literal> subdirectory that
         contains such files.  The default is to search in the current
         directory, the <literal>pg_wal</literal> subdirectory of the
@@ -203,7 +203,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -213,7 +213,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 01f7379ebb..30842c0396 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -297,12 +297,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -323,15 +323,15 @@
 
    <para>
     Using <acronym>WAL</acronym> results in a
-    significantly reduced number of disk writes, because only the log
+    significantly reduced number of disk writes, because only the WAL
     file needs to be flushed to disk to guarantee that a transaction is
     committed, rather than every data file changed by the transaction.
-    The log file is written sequentially,
-    and so the cost of syncing the log is much less than the cost of
+    The WAL file is written sequentially,
+    and so the cost of syncing the WAL is much less than the cost of
     flushing the data pages.  This is especially true for servers
     handling many small transactions touching different parts of the data
     store.  Furthermore, when the server is processing many small concurrent
-    transactions, one <function>fsync</function> of the log file may
+    transactions, one <function>fsync</function> of the WAL file may
     suffice to commit many transactions.
    </para>
 
@@ -341,10 +341,10 @@
     linkend="continuous-archiving"/>.  By archiving the WAL data we can support
     reverting to any time instant covered by the available WAL data:
     we simply install a prior physical backup of the database, and
-    replay the WAL log just as far as the desired time.  What's more,
+    replay the WAL just as far as the desired time.  What's more,
     the physical backup doesn't have to be an instantaneous snapshot
     of the database state &mdash; if it is made over some period of time,
-    then replaying the WAL log for that period will fix any internal
+    then replaying the WAL for that period will fix any internal
     inconsistencies.
    </para>
   </sect1>
@@ -497,15 +497,15 @@
    that the heap and index data files have been updated with all
    information written before that checkpoint.  At checkpoint time, all
    dirty data pages are flushed to disk and a special checkpoint record is
-   written to the log file.  (The change records were previously flushed
+   written to the WAL file.  (The change records were previously flushed
    to the <acronym>WAL</acronym> files.)
    In the event of a crash, the crash recovery procedure looks at the latest
-   checkpoint record to determine the point in the log (known as the redo
+   checkpoint record to determine the point in the WAL (known as the redo
    record) from which it should start the REDO operation.  Any changes made to
    data files before that point are guaranteed to be already on disk.
-   Hence, after a checkpoint, log segments preceding the one containing
+   Hence, after a checkpoint, WAL segments preceding the one containing
    the redo record are no longer needed and can be recycled or removed. (When
-   <acronym>WAL</acronym> archiving is being done, the log segments must be
+   <acronym>WAL</acronym> archiving is being done, the WAL segments must be
    archived before being recycled or removed.)
   </para>
 
@@ -544,7 +544,7 @@
    another factor to consider. To ensure data page consistency,
    the first modification of a data page after each checkpoint results in
    logging the entire page content. In that case,
-   a smaller checkpoint interval increases the volume of output to the WAL log,
+   a smaller checkpoint interval increases the volume of output to the WAL,
    partially negating the goal of using a smaller interval,
    and in any case causing more disk I/O.
   </para>
@@ -614,10 +614,10 @@
   <para>
    The number of WAL segment files in <filename>pg_wal</filename> directory depends on
    <varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
-   the amount of WAL generated in previous checkpoint cycles. When old log
+   the amount of WAL generated in previous checkpoint cycles. When old WAL
    segment files are no longer needed, they are removed or recycled (that is,
    renamed to become future segments in the numbered sequence). If, due to a
-   short-term peak of log output rate, <varname>max_wal_size</varname> is
+   short-term peak of WAL output rate, <varname>max_wal_size</varname> is
    exceeded, the unneeded segment files will be removed until the system
    gets back under this limit. Below that limit, the system recycles enough
    WAL files to cover the estimated need until the next checkpoint, and
@@ -650,7 +650,7 @@
    which are similar to checkpoints in normal operation: the server forces
    all its state to disk, updates the <filename>pg_control</filename> file to
    indicate that the already-processed WAL data need not be scanned again,
-   and then recycles any old log segment files in the <filename>pg_wal</filename>
+   and then recycles any old WAL segment files in the <filename>pg_wal</filename>
    directory.
    Restartpoints can't be performed more frequently than checkpoints on the
    primary because restartpoints can only be performed at checkpoint records.
@@ -676,12 +676,12 @@
    insertion) at a time when an exclusive lock is held on affected
    data pages, so the operation needs to be as fast as possible.  What
    is worse, writing <acronym>WAL</acronym> buffers might also force the
-   creation of a new log segment, which takes even more
+   creation of a new WAL segment, which takes even more
    time. Normally, <acronym>WAL</acronym> buffers should be written
    and flushed by an <function>XLogFlush</function> request, which is
    made, for the most part, at transaction commit time to ensure that
    transaction records are flushed to permanent storage. On systems
-   with high log output, <function>XLogFlush</function> requests might
+   with high WAL output, <function>XLogFlush</function> requests might
    not occur often enough to prevent <function>XLogInsertRecord</function>
    from having to do writes.  On such systems
    one should increase the number of <acronym>WAL</acronym> buffers by
@@ -724,7 +724,7 @@
    <varname>commit_delay</varname>, so this value is recommended as the
    starting point to use when optimizing for a particular workload.  While
    tuning <varname>commit_delay</varname> is particularly useful when the
-   WAL log is stored on high-latency rotating disks, benefits can be
+   WAL is stored on high-latency rotating disks, benefits can be
    significant even on storage media with very fast sync times, such as
    solid-state drives or RAID arrays with a battery-backed write cache;
    but this should definitely be tested against a representative workload.
@@ -828,16 +828,16 @@
   <para>
    <acronym>WAL</acronym> is automatically enabled; no action is
    required from the administrator except ensuring that the
-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> files are met,
    and that any necessary tuning is done (see <xref
    linkend="wal-configuration"/>).
   </para>
 
   <para>
    <acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
-   logs as each new record is written. The insert position is described by
+   files as each new record is written. The insert position is described by
    a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
-   the logs, increasing monotonically with each new record.
+   the WAL, increasing monotonically with each new record.
    <acronym>LSN</acronym> values are returned as the datatype
    <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
    compared to calculate the volume of <acronym>WAL</acronym> data that
@@ -846,12 +846,12 @@
   </para>
 
   <para>
-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> files are stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
@@ -861,7 +861,7 @@
   </para>
 
   <para>
-   It is advantageous if the log is located on a different disk from the
+   It is advantageous if the WAL is located on a different disk from the
    main database files.  This can be achieved by moving the
    <filename>pg_wal</filename> directory to another location (while the server
    is shut down, of course) and creating a symbolic link from the
@@ -877,19 +877,19 @@
    on the disk.  A power failure in such a situation might lead to
    irrecoverable data corruption.  Administrators should try to ensure
    that disks holding <productname>PostgreSQL</productname>'s
-   <acronym>WAL</acronym> log files do not make such false reports.
+   <acronym>WAL</acronym> files do not make such false reports.
    (See <xref linkend="wal-reliability"/>.)
   </para>
 
   <para>
-   After a checkpoint has been made and the log flushed, the
+   After a checkpoint has been made and the WAL flushed, the
    checkpoint's position is saved in the file
    <filename>pg_control</filename>. Therefore, at the start of recovery,
    the server first reads <filename>pg_control</filename> and
    then the checkpoint record; then it performs the REDO operation by
-   scanning forward from the log location indicated in the checkpoint
+   scanning forward from the WAL location indicated in the checkpoint
    record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL on the first page modification after a checkpoint (assuming
    <xref linkend="guc-full-page-writes"/> is not disabled), all pages
    changed since the checkpoint will be restored to a consistent
    state.
@@ -897,7 +897,7 @@
 
   <para>
    To deal with the case where <filename>pg_control</filename> is
-   corrupt, we should support the possibility of scanning existing log
+   corrupt, we should support the possibility of scanning existing WAL
    segments in reverse order &mdash; newest to oldest &mdash; in order to find the
    latest checkpoint.  This has not been implemented yet.
    <filename>pg_control</filename> is small enough (less than one disk page)
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index b4bc06e5f5..6bb37cbecf 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -251,21 +251,21 @@
 
 #archive_mode = off		# enables archiving; off, on, or always
 				# (change requires restart)
-#archive_library = ''		# library to use to archive a logfile segment
+#archive_library = ''		# library to use to archive a WAL file
 				# (empty string indicates archive_command should
 				# be used)
-#archive_command = ''		# command to use to archive a logfile segment
+#archive_command = ''		# command to use to archive a WAL file
 				# placeholders: %p = path of file to archive
 				#               %f = file name only
 				# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
-#archive_timeout = 0		# force a logfile segment switch after this
+#archive_timeout = 0		# force a WAL file switch after this
 				# number of seconds; 0 disables
 
 # - Archive Recovery -
 
 # These are only used in recovery mode.
 
-#restore_command = ''		# command to use to restore an archived logfile segment
+#restore_command = ''		# command to use to restore an archived WAL file
 				# placeholders: %p = path of file to restore
 				#               %f = file name only
 				# e.g. 'cp /mnt/server/archivedir/%f %p'
-- 
2.25.1
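
For readers skimming the conf.sample hunk above, here is a minimal, purely illustrative postgresql.conf excerpt showing the reworded settings uncommented; the archive path and the timeout value are placeholders, and the command is the sample file's own example:

archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'	# command to use to archive a WAL file
archive_timeout = 300		# force a WAL file switch after 300 seconds; 0 disables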

#17 Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Bharath Rupireddy (#16)
1 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

At Wed, 20 Jul 2022 17:25:33 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in

> On Wed, Jul 20, 2022 at 12:55 PM Kyotaro Horiguchi
> <horikyota.ntt@gmail.com> wrote:
>
> PSA v7 patch set.

Thanks. Looks perfect, but (sorry...) during a final check I found
"log archive" in the docs. If you agree, please merge the attached
patch (or a refined version) and I'd call it a day.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

Attachments:

fix_log_WAL.patch (text/x-patch; charset=us-ascii)
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index c0b89a3c01..e5344eb277 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -2659,7 +2659,7 @@ psql "dbname=postgres replication=database" -c "IDENTIFY_SYSTEM;"
          <listitem>
           <para>
            If set to true, the backup will wait until the last required WAL
-           segment has been archived, or emit a warning if log archiving is
+           segment has been archived, or emit a warning if WAL archiving is
            not enabled. If false, the backup will neither wait nor warn,
            leaving the client responsible for ensuring the required log is
            available. The default is true.
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index 56ac7b754b..e50f00afa8 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -318,7 +318,7 @@ PostgreSQL documentation
         backup. This will include all write-ahead logs generated during
         the backup. Unless the method <literal>none</literal> is specified,
         it is possible to start a postmaster in the target
-        directory without the need to consult the log archive, thus
+        directory without the need to consult the WAL archive, thus
         making the output a completely standalone backup.
        </para>
        <para>
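
As a side note, the kind of final wording check described in this message can be approximated with a plain text search over the docs; the command below is only a sketch, not part of the attached patch:

$ grep -rnE "log (archive|segment)" doc/src/sgml

Any remaining hits point at places where the older wording may still need attention.
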
#18 Bharath Rupireddy
bharath.rupireddyforpostgres@gmail.com
In reply to: Kyotaro Horiguchi (#17)
2 attachment(s)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

On Thu, Jul 21, 2022 at 9:50 AM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:

> At Wed, 20 Jul 2022 17:25:33 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in
>
> > On Wed, Jul 20, 2022 at 12:55 PM Kyotaro Horiguchi
> > <horikyota.ntt@gmail.com> wrote:
> >
> > PSA v7 patch set.
>
> Thanks. Looks perfect, but (sorry...) during a final check I found
> "log archive" in the docs. If you agree, please merge the attached
> patch (or a refined version) and I'd call it a day.

Merged. PSA v8 patch set.

Regards,
Bharath Rupireddy.

Attachments:

v8-0001-Use-WAL-segment-instead-of-log-segment.patch (application/octet-stream)
From ec8879007aac9a5b81abf6c627acc7d65c178944 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Sat, 23 Jul 2022 09:36:56 +0000
Subject: [PATCH v8] Use "WAL segment" instead of "log segment"

We are using "log segment" in various user-facing messages, the
term "log" can mean server logs as well. The "WAL segment" suits
well here and it is consistently used across the other user-facing
messages.

Author: Bharath Rupireddy
---
 src/backend/access/transam/xlogreader.c   | 10 +++++-----
 src/backend/access/transam/xlogrecovery.c |  6 +++---
 src/backend/access/transam/xlogutils.c    |  4 ++--
 src/backend/replication/walreceiver.c     |  6 +++---
 src/bin/pg_resetwal/pg_resetwal.c         |  2 +-
 src/bin/pg_upgrade/controldata.c          |  2 +-
 src/bin/pg_waldump/pg_waldump.c           |  4 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f3dc4b7797..58f1a32b00 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1209,7 +1209,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid magic number %04X in log segment %s, offset %u",
+							  "invalid magic number %04X in WAL segment %s, offset %u",
 							  hdr->xlp_magic,
 							  fname,
 							  offset);
@@ -1223,7 +1223,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1264,7 +1264,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 
 		/* hmm, first page of file doesn't have a long header? */
 		report_invalid_record(state,
-							  "invalid info bits %04X in log segment %s, offset %u",
+							  "invalid info bits %04X in WAL segment %s, offset %u",
 							  hdr->xlp_info,
 							  fname,
 							  offset);
@@ -1283,7 +1283,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 		XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 		report_invalid_record(state,
-							  "unexpected pageaddr %X/%X in log segment %s, offset %u",
+							  "unexpected pageaddr %X/%X in WAL segment %s, offset %u",
 							  LSN_FORMAT_ARGS(hdr->xlp_pageaddr),
 							  fname,
 							  offset);
@@ -1308,7 +1308,7 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
 			XLogFileName(fname, state->seg.ws_tli, segno, state->segcxt.ws_segsize);
 
 			report_invalid_record(state,
-								  "out-of-sequence timeline ID %u (after %u) in log segment %s, offset %u",
+								  "out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u",
 								  hdr->xlp_tli,
 								  state->latestPageTLI,
 								  fname,
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 5d6f1b5e46..306a9f40e9 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3018,7 +3018,7 @@ ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,
 			XLogFileName(fname, xlogreader->seg.ws_tli, segno,
 						 wal_segment_size);
 			ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
-					(errmsg("unexpected timeline ID %u in log segment %s, offset %u",
+					(errmsg("unexpected timeline ID %u in WAL segment %s, offset %u",
 							xlogreader->latestPageTLI,
 							fname,
 							offset)));
@@ -3223,13 +3223,13 @@ retry:
 			errno = save_errno;
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode_for_file_access(),
-					 errmsg("could not read from log segment %s, offset %u: %m",
+					 errmsg("could not read from WAL segment %s, offset %u: %m",
 							fname, readOff)));
 		}
 		else
 			ereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("could not read from log segment %s, offset %u: read %d of %zu",
+					 errmsg("could not read from WAL segment %s, offset %u: read %d of %zu",
 							fname, readOff, r, (Size) XLOG_BLCKSZ)));
 		goto next_record_is_invalid;
 	}
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 0cda22597f..9e3a000768 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -1049,14 +1049,14 @@ WALReadRaiseError(WALReadError *errinfo)
 		errno = errinfo->wre_errno;
 		ereport(ERROR,
 				(errcode_for_file_access(),
-				 errmsg("could not read from log segment %s, offset %d: %m",
+				 errmsg("could not read from WAL segment %s, offset %d: %m",
 						fname, errinfo->wre_off)));
 	}
 	else if (errinfo->wre_read == 0)
 	{
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("could not read from log segment %s, offset %d: read %d of %d",
+				 errmsg("could not read from WAL segment %s, offset %d: read %d of %d",
 						fname, errinfo->wre_off, errinfo->wre_read,
 						errinfo->wre_req)));
 	}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 3d37c1fe62..3767466ef3 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -616,7 +616,7 @@ WalReceiverMain(void)
 			if (close(recvFile) != 0)
 				ereport(PANIC,
 						(errcode_for_file_access(),
-						 errmsg("could not close log segment %s: %m",
+						 errmsg("could not close WAL segment %s: %m",
 								xlogfname)));
 
 			/*
@@ -930,7 +930,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
 			errno = save_errno;
 			ereport(PANIC,
 					(errcode_for_file_access(),
-					 errmsg("could not write to log segment %s "
+					 errmsg("could not write to WAL segment %s "
 							"at offset %u, length %lu: %m",
 							xlogfname, startoff, (unsigned long) segbytes)));
 		}
@@ -1042,7 +1042,7 @@ XLogWalRcvClose(XLogRecPtr recptr, TimeLineID tli)
 	if (close(recvFile) != 0)
 		ereport(PANIC,
 				(errcode_for_file_access(),
-				 errmsg("could not close log segment %s: %m",
+				 errmsg("could not close WAL segment %s: %m",
 						xlogfname)));
 
 	/*
diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c
index d4772a2965..7adf79eeed 100644
--- a/src/bin/pg_resetwal/pg_resetwal.c
+++ b/src/bin/pg_resetwal/pg_resetwal.c
@@ -788,7 +788,7 @@ PrintNewControlValues(void)
 
 	XLogFileName(fname, ControlFile.checkPointCopy.ThisTimeLineID,
 				 newXlogSegNo, WalSegSz);
-	printf(_("First log segment after reset:        %s\n"), fname);
+	printf(_("First WAL segment after reset:        %s\n"), fname);
 
 	if (set_mxid != 0)
 	{
diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c
index 07de918358..678e8ebf6b 100644
--- a/src/bin/pg_upgrade/controldata.c
+++ b/src/bin/pg_upgrade/controldata.c
@@ -350,7 +350,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)
 			cluster->controldata.chkpnt_nxtmxoff = str2uint(p);
 			got_mxoff = true;
 		}
-		else if ((p = strstr(bufin, "First log segment after reset:")) != NULL)
+		else if ((p = strstr(bufin, "First WAL segment after reset:")) != NULL)
 		{
 			/* Skip the colon and any whitespace after it */
 			p = strchr(p, ':');
diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c
index 6528113628..4eebeadc8c 100644
--- a/src/bin/pg_waldump/pg_waldump.c
+++ b/src/bin/pg_waldump/pg_waldump.c
@@ -667,7 +667,7 @@ usage(void)
 	printf(_("  -F, --fork=FORK        only show records that modify blocks in fork FORK;\n"
 			 "                         valid names are main, fsm, vm, init\n"));
 	printf(_("  -n, --limit=N          number of records to display\n"));
-	printf(_("  -p, --path=PATH        directory in which to find log segment files or a\n"
+	printf(_("  -p, --path=PATH        directory in which to find WAL segment files or a\n"
 			 "                         directory with a ./pg_wal that contains such files\n"
 			 "                         (default: current directory, ./pg_wal, $PGDATA/pg_wal)\n"));
 	printf(_("  -q, --quiet            do not print any output, except for errors\n"));
@@ -675,7 +675,7 @@ usage(void)
 			 "                         use --rmgr=list to list valid resource manager names\n"));
 	printf(_("  -R, --relation=T/D/R   only show records that modify blocks in relation T/D/R\n"));
 	printf(_("  -s, --start=RECPTR     start reading at WAL location RECPTR\n"));
-	printf(_("  -t, --timeline=TLI     timeline from which to read log records\n"
+	printf(_("  -t, --timeline=TLI     timeline from which to read WAL records\n"
 			 "                         (default: 1 or the value used in STARTSEG)\n"));
 	printf(_("  -V, --version          output version information, then exit\n"));
 	printf(_("  -w, --fullpage         only show records with a full page write\n"));
-- 
2.34.1

v8-0002-Consistently-use-WAL-file-s-in-docs.patch (application/octet-stream)
From e52b42062e04d026d7efcd9d8bebf03fcf77e8b2 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Date: Sat, 23 Jul 2022 09:55:57 +0000
Subject: [PATCH v8] Consistently use "WAL file(s)" in docs

Authors: Kyotaro Horiguchi, Bharath Rupireddy
---
 doc/src/sgml/backup.sgml                      | 14 ++---
 doc/src/sgml/config.sgml                      |  4 +-
 doc/src/sgml/protocol.sgml                    |  2 +-
 doc/src/sgml/ref/pg_basebackup.sgml           |  2 +-
 doc/src/sgml/ref/pg_waldump.sgml              | 10 ++--
 doc/src/sgml/wal.sgml                         | 60 +++++++++----------
 src/backend/utils/misc/postgresql.conf.sample |  8 +--
 7 files changed, 50 insertions(+), 50 deletions(-)

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 73a774d3d7..cc5ae59ac2 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1095,7 +1095,7 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
      require that you have enough free space on your system to hold two
      copies of your existing database. If you do not have enough space,
      you should at least save the contents of the cluster's <filename>pg_wal</filename>
-     subdirectory, as it might contain logs which
+     subdirectory, as it might contain WAL files which
      were not archived before the system went down.
     </para>
    </listitem>
@@ -1173,8 +1173,8 @@ SELECT * FROM pg_backup_stop(wait_for_archive => true);
     which tells <productname>PostgreSQL</productname> how to retrieve archived
     WAL file segments.  Like the <varname>archive_command</varname>, this is
     a shell command string.  It can contain <literal>%f</literal>, which is
-    replaced by the name of the desired log file, and <literal>%p</literal>,
-    which is replaced by the path name to copy the log file to.
+    replaced by the name of the desired WAL file, and <literal>%p</literal>,
+    which is replaced by the path name to copy the WAL file to.
     (The path name is relative to the current working directory,
     i.e., the cluster's data directory.)
     Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
@@ -1462,9 +1462,9 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
      <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
      commands are WAL-logged with the literal absolute path, and will
      therefore be replayed as tablespace creations with the same
-     absolute path.  This might be undesirable if the log is being
+     absolute path.  This might be undesirable if the WAL is being
      replayed on a different machine.  It can be dangerous even if the
-     log is being replayed on the same machine, but into a new data
+     WAL is being replayed on the same machine, but into a new data
      directory: the replay will still overwrite the contents of the
      original tablespace.  To avoid potential gotchas of this sort,
      the best practice is to take a new base backup after creating or
@@ -1481,11 +1481,11 @@ archive_command = 'local_backup_script.sh "%p" "%f"'
     we might need to fix partially-written disk pages.  Depending on
     your system hardware and software, the risk of partial writes might
     be small enough to ignore, in which case you can significantly
-    reduce the total volume of archived logs by turning off page
+    reduce the total volume of archived WAL files by turning off page
     snapshots using the <xref linkend="guc-full-page-writes"/>
     parameter.  (Read the notes and warnings in <xref linkend="wal"/>
     before you do so.)  Turning off page snapshots does not prevent
-    use of the logs for PITR operations.  An area for future
+    use of the WAL for PITR operations.  An area for future
     development is to compress archived WAL data by removing
     unnecessary page copies even when <varname>full_page_writes</varname> is
     on.  In the meantime, administrators might wish to reduce the number
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e2d728e0c4..4e92c0c5eb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4228,7 +4228,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
        </term>
        <listitem>
        <para>
-        Specifies the minimum size of past log file segments kept in the
+        Specifies the minimum size of past WAL files kept in the
         <filename>pg_wal</filename>
         directory, in case a standby server needs to fetch them for streaming
         replication. If a standby
@@ -4821,7 +4821,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
         needs to control the amount of time to wait for new WAL data to be
         available. For example, in archive recovery, it is possible to
         make the recovery more responsive in the detection of a new WAL
-        log file by reducing the value of this parameter. On a system with
+        file by reducing the value of this parameter. On a system with
         low WAL activity, increasing it reduces the amount of requests necessary
         to access WAL archives, something useful for example in cloud
         environments where the number of times an infrastructure is accessed
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index c0b89a3c01..e5344eb277 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -2659,7 +2659,7 @@ psql "dbname=postgres replication=database" -c "IDENTIFY_SYSTEM;"
          <listitem>
           <para>
            If set to true, the backup will wait until the last required WAL
-           segment has been archived, or emit a warning if log archiving is
+           segment has been archived, or emit a warning if WAL archiving is
            not enabled. If false, the backup will neither wait nor warn,
            leaving the client responsible for ensuring the required log is
            available. The default is true.
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index 56ac7b754b..e50f00afa8 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -318,7 +318,7 @@ PostgreSQL documentation
         backup. This will include all write-ahead logs generated during
         the backup. Unless the method <literal>none</literal> is specified,
         it is possible to start a postmaster in the target
-        directory without the need to consult the log archive, thus
+        directory without the need to consult the WAL archive, thus
         making the output a completely standalone backup.
        </para>
        <para>
diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml
index 57746d9421..2e2166bb6f 100644
--- a/doc/src/sgml/ref/pg_waldump.sgml
+++ b/doc/src/sgml/ref/pg_waldump.sgml
@@ -53,7 +53,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">startseg</replaceable></term>
       <listitem>
        <para>
-        Start reading at the specified log segment file.  This implicitly determines
+        Start reading at the specified WAL segment file.  This implicitly determines
         the path in which files will be searched for, and the timeline to use.
        </para>
       </listitem>
@@ -63,7 +63,7 @@ PostgreSQL documentation
       <term><replaceable class="parameter">endseg</replaceable></term>
       <listitem>
        <para>
-        Stop after reading the specified log segment file.
+        Stop after reading the specified WAL segment file.
        </para>
       </listitem>
      </varlistentry>
@@ -141,7 +141,7 @@ PostgreSQL documentation
       <term><option>--path=<replaceable>path</replaceable></option></term>
       <listitem>
        <para>
-        Specifies a directory to search for log segment files or a
+        Specifies a directory to search for WAL segment files or a
         directory with a <literal>pg_wal</literal> subdirectory that
         contains such files.  The default is to search in the current
         directory, the <literal>pg_wal</literal> subdirectory of the
@@ -203,7 +203,7 @@ PostgreSQL documentation
       <listitem>
        <para>
         WAL location at which to start reading. The default is to start reading
-        the first valid log record found in the earliest file found.
+        the first valid WAL record found in the earliest file found.
        </para>
       </listitem>
      </varlistentry>
@@ -213,7 +213,7 @@ PostgreSQL documentation
       <term><option>--timeline=<replaceable>timeline</replaceable></option></term>
       <listitem>
        <para>
-        Timeline from which to read log records. The default is to use the
+        Timeline from which to read WAL records. The default is to use the
         value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the
         default is 1.
        </para>
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 01f7379ebb..30842c0396 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -297,12 +297,12 @@
     transaction processing. Briefly, <acronym>WAL</acronym>'s central
     concept is that changes to data files (where tables and indexes
     reside) must be written only after those changes have been logged,
-    that is, after log records describing the changes have been flushed
+    that is, after WAL records describing the changes have been flushed
     to permanent storage. If we follow this procedure, we do not need
     to flush data pages to disk on every transaction commit, because we
     know that in the event of a crash we will be able to recover the
     database using the log: any changes that have not been applied to
-    the data pages can be redone from the log records.  (This is
+    the data pages can be redone from the WAL records.  (This is
     roll-forward recovery, also known as REDO.)
    </para>
 
@@ -323,15 +323,15 @@
 
    <para>
     Using <acronym>WAL</acronym> results in a
-    significantly reduced number of disk writes, because only the log
+    significantly reduced number of disk writes, because only the WAL
     file needs to be flushed to disk to guarantee that a transaction is
     committed, rather than every data file changed by the transaction.
-    The log file is written sequentially,
-    and so the cost of syncing the log is much less than the cost of
+    The WAL file is written sequentially,
+    and so the cost of syncing the WAL is much less than the cost of
     flushing the data pages.  This is especially true for servers
     handling many small transactions touching different parts of the data
     store.  Furthermore, when the server is processing many small concurrent
-    transactions, one <function>fsync</function> of the log file may
+    transactions, one <function>fsync</function> of the WAL file may
     suffice to commit many transactions.
    </para>
 
@@ -341,10 +341,10 @@
     linkend="continuous-archiving"/>.  By archiving the WAL data we can support
     reverting to any time instant covered by the available WAL data:
     we simply install a prior physical backup of the database, and
-    replay the WAL log just as far as the desired time.  What's more,
+    replay the WAL just as far as the desired time.  What's more,
     the physical backup doesn't have to be an instantaneous snapshot
     of the database state &mdash; if it is made over some period of time,
-    then replaying the WAL log for that period will fix any internal
+    then replaying the WAL for that period will fix any internal
     inconsistencies.
    </para>
   </sect1>
@@ -497,15 +497,15 @@
    that the heap and index data files have been updated with all
    information written before that checkpoint.  At checkpoint time, all
    dirty data pages are flushed to disk and a special checkpoint record is
-   written to the log file.  (The change records were previously flushed
+   written to the WAL file.  (The change records were previously flushed
    to the <acronym>WAL</acronym> files.)
    In the event of a crash, the crash recovery procedure looks at the latest
-   checkpoint record to determine the point in the log (known as the redo
+   checkpoint record to determine the point in the WAL (known as the redo
    record) from which it should start the REDO operation.  Any changes made to
    data files before that point are guaranteed to be already on disk.
-   Hence, after a checkpoint, log segments preceding the one containing
+   Hence, after a checkpoint, WAL segments preceding the one containing
    the redo record are no longer needed and can be recycled or removed. (When
-   <acronym>WAL</acronym> archiving is being done, the log segments must be
+   <acronym>WAL</acronym> archiving is being done, the WAL segments must be
    archived before being recycled or removed.)
   </para>
 
@@ -544,7 +544,7 @@
    another factor to consider. To ensure data page consistency,
    the first modification of a data page after each checkpoint results in
    logging the entire page content. In that case,
-   a smaller checkpoint interval increases the volume of output to the WAL log,
+   a smaller checkpoint interval increases the volume of output to the WAL,
    partially negating the goal of using a smaller interval,
    and in any case causing more disk I/O.
   </para>
@@ -614,10 +614,10 @@
   <para>
    The number of WAL segment files in <filename>pg_wal</filename> directory depends on
    <varname>min_wal_size</varname>, <varname>max_wal_size</varname> and
-   the amount of WAL generated in previous checkpoint cycles. When old log
+   the amount of WAL generated in previous checkpoint cycles. When old WAL
    segment files are no longer needed, they are removed or recycled (that is,
    renamed to become future segments in the numbered sequence). If, due to a
-   short-term peak of log output rate, <varname>max_wal_size</varname> is
+   short-term peak of WAL output rate, <varname>max_wal_size</varname> is
    exceeded, the unneeded segment files will be removed until the system
    gets back under this limit. Below that limit, the system recycles enough
    WAL files to cover the estimated need until the next checkpoint, and
@@ -650,7 +650,7 @@
    which are similar to checkpoints in normal operation: the server forces
    all its state to disk, updates the <filename>pg_control</filename> file to
    indicate that the already-processed WAL data need not be scanned again,
-   and then recycles any old log segment files in the <filename>pg_wal</filename>
+   and then recycles any old WAL segment files in the <filename>pg_wal</filename>
    directory.
    Restartpoints can't be performed more frequently than checkpoints on the
    primary because restartpoints can only be performed at checkpoint records.
@@ -676,12 +676,12 @@
    insertion) at a time when an exclusive lock is held on affected
    data pages, so the operation needs to be as fast as possible.  What
    is worse, writing <acronym>WAL</acronym> buffers might also force the
-   creation of a new log segment, which takes even more
+   creation of a new WAL segment, which takes even more
    time. Normally, <acronym>WAL</acronym> buffers should be written
    and flushed by an <function>XLogFlush</function> request, which is
    made, for the most part, at transaction commit time to ensure that
    transaction records are flushed to permanent storage. On systems
-   with high log output, <function>XLogFlush</function> requests might
+   with high WAL output, <function>XLogFlush</function> requests might
    not occur often enough to prevent <function>XLogInsertRecord</function>
    from having to do writes.  On such systems
    one should increase the number of <acronym>WAL</acronym> buffers by
@@ -724,7 +724,7 @@
    <varname>commit_delay</varname>, so this value is recommended as the
    starting point to use when optimizing for a particular workload.  While
    tuning <varname>commit_delay</varname> is particularly useful when the
-   WAL log is stored on high-latency rotating disks, benefits can be
+   WAL is stored on high-latency rotating disks, benefits can be
    significant even on storage media with very fast sync times, such as
    solid-state drives or RAID arrays with a battery-backed write cache;
    but this should definitely be tested against a representative workload.
@@ -828,16 +828,16 @@
   <para>
    <acronym>WAL</acronym> is automatically enabled; no action is
    required from the administrator except ensuring that the
-   disk-space requirements for the <acronym>WAL</acronym> logs are met,
+   disk-space requirements for the <acronym>WAL</acronym> files are met,
    and that any necessary tuning is done (see <xref
    linkend="wal-configuration"/>).
   </para>
 
   <para>
    <acronym>WAL</acronym> records are appended to the <acronym>WAL</acronym>
-   logs as each new record is written. The insert position is described by
+   files as each new record is written. The insert position is described by
    a Log Sequence Number (<acronym>LSN</acronym>) that is a byte offset into
-   the logs, increasing monotonically with each new record.
+   the WAL, increasing monotonically with each new record.
    <acronym>LSN</acronym> values are returned as the datatype
    <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link>. Values can be
    compared to calculate the volume of <acronym>WAL</acronym> data that
@@ -846,12 +846,12 @@
   </para>
 
   <para>
-   <acronym>WAL</acronym> logs are stored in the directory
+   <acronym>WAL</acronym> files are stored in the directory
    <filename>pg_wal</filename> under the data directory, as a set of
    segment files, normally each 16 MB in size (but the size can be changed
    by altering the <option>--wal-segsize</option> <application>initdb</application> option).  Each segment is
    divided into pages, normally 8 kB each (this size can be changed via the
-   <option>--with-wal-blocksize</option> configure option).  The log record headers
+   <option>--with-wal-blocksize</option> configure option).  The WAL record headers
    are described in <filename>access/xlogrecord.h</filename>; the record
    content is dependent on the type of event that is being logged.  Segment
    files are given ever-increasing numbers as names, starting at
@@ -861,7 +861,7 @@
   </para>
 
   <para>
-   It is advantageous if the log is located on a different disk from the
+   It is advantageous if the WAL is located on a different disk from the
    main database files.  This can be achieved by moving the
    <filename>pg_wal</filename> directory to another location (while the server
    is shut down, of course) and creating a symbolic link from the
@@ -877,19 +877,19 @@
    on the disk.  A power failure in such a situation might lead to
    irrecoverable data corruption.  Administrators should try to ensure
    that disks holding <productname>PostgreSQL</productname>'s
-   <acronym>WAL</acronym> log files do not make such false reports.
+   <acronym>WAL</acronym> files do not make such false reports.
    (See <xref linkend="wal-reliability"/>.)
   </para>
 
   <para>
-   After a checkpoint has been made and the log flushed, the
+   After a checkpoint has been made and the WAL flushed, the
    checkpoint's position is saved in the file
    <filename>pg_control</filename>. Therefore, at the start of recovery,
    the server first reads <filename>pg_control</filename> and
    then the checkpoint record; then it performs the REDO operation by
-   scanning forward from the log location indicated in the checkpoint
+   scanning forward from the WAL location indicated in the checkpoint
    record.  Because the entire content of data pages is saved in the
-   log on the first page modification after a checkpoint (assuming
+   WAL on the first page modification after a checkpoint (assuming
    <xref linkend="guc-full-page-writes"/> is not disabled), all pages
    changed since the checkpoint will be restored to a consistent
    state.
@@ -897,7 +897,7 @@
 
   <para>
    To deal with the case where <filename>pg_control</filename> is
-   corrupt, we should support the possibility of scanning existing log
+   corrupt, we should support the possibility of scanning existing WAL
    segments in reverse order &mdash; newest to oldest &mdash; in order to find the
    latest checkpoint.  This has not been implemented yet.
    <filename>pg_control</filename> is small enough (less than one disk page)
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index b4bc06e5f5..6bb37cbecf 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -251,21 +251,21 @@
 
 #archive_mode = off		# enables archiving; off, on, or always
 				# (change requires restart)
-#archive_library = ''		# library to use to archive a logfile segment
+#archive_library = ''		# library to use to archive a WAL file
 				# (empty string indicates archive_command should
 				# be used)
-#archive_command = ''		# command to use to archive a logfile segment
+#archive_command = ''		# command to use to archive a WAL file
 				# placeholders: %p = path of file to archive
 				#               %f = file name only
 				# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
-#archive_timeout = 0		# force a logfile segment switch after this
+#archive_timeout = 0		# force a WAL file switch after this
 				# number of seconds; 0 disables
 
 # - Archive Recovery -
 
 # These are only used in recovery mode.
 
-#restore_command = ''		# command to use to restore an archived logfile segment
+#restore_command = ''		# command to use to restore an archived WAL file
 				# placeholders: %p = path of file to restore
 				#               %f = file name only
 				# e.g. 'cp /mnt/server/archivedir/%f %p'
-- 
2.34.1

#19 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bharath Rupireddy (#18)
Re: Use "WAL segment" instead of "log segment" consistently in user-facing messages

Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:
> Merged. PSA v8 patch set.

Pushed, thanks.

regards, tom lane